public inbox for devel@edk2.groups.io
* [PATCH 0/3] Add use-after-free memory detection
@ 2018-10-19  1:50 Jian J Wang
  2018-10-19  1:50 ` [PATCH 1/3] MdeModulePkg/MdeModulePkg.dec: add new PCD for UAF detection feature Jian J Wang
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Jian J Wang @ 2018-10-19  1:50 UTC (permalink / raw)
  To: edk2-devel

UAF (Use-After-Free) memory detection is a new feature introduced to
detect illegal access to memory that has been freed.

Jian J Wang (3):
  MdeModulePkg/MdeModulePkg.dec: add new PCD for UAF detection feature
  UefiCpuPkg/CpuDxe: fix an infinite loop issue
  MdeModulePkg/Core: add use-after-free memory detection

 MdeModulePkg/Core/Dxe/DxeMain.h              |   1 +
 MdeModulePkg/Core/Dxe/DxeMain.inf            |   1 +
 MdeModulePkg/Core/Dxe/Gcd/Gcd.c              |   8 +
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.c        | 393 ++++++++++++++++++++++++++-
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.h        |  66 +++++
 MdeModulePkg/Core/Dxe/Mem/Page.c             |  39 ++-
 MdeModulePkg/Core/Dxe/Mem/Pool.c             |  21 +-
 MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c |  16 +-
 MdeModulePkg/MdeModulePkg.dec                |   6 +
 UefiCpuPkg/CpuDxe/CpuPageTable.c             |  12 +
 10 files changed, 537 insertions(+), 26 deletions(-)

-- 
2.16.2.windows.1




* [PATCH 1/3] MdeModulePkg/MdeModulePkg.dec: add new PCD for UAF detection feature
  2018-10-19  1:50 [PATCH 0/3] Add use-after-free memory detection Jian J Wang
@ 2018-10-19  1:50 ` Jian J Wang
  2018-10-19 11:27   ` Laszlo Ersek
  2018-10-22  2:20   ` Zeng, Star
  2018-10-19  1:50 ` [PATCH 2/3] UefiCpuPkg/CpuDxe: fix an infinite loop issue Jian J Wang
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 16+ messages in thread
From: Jian J Wang @ 2018-10-19  1:50 UTC (permalink / raw)
  To: edk2-devel
  Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni, Laszlo Ersek

UAF (Use-After-Free) memory detection is a new feature introduced to
detect illegal access to memory that has been freed. The principle
behind it is similar to the heap guard feature: we turn all pool
memory allocations into page allocations and mark the pages as
not-present once they are freed.
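
In other words, the free path conceptually does the following (a
simplified sketch of what GuardFreedPages() in patch 3 implements; the
real code also skips legacy memory below 1MB and sets mOnGuarding
around the call):

  //
  // Record the freed range in the guard bitmap, then mark it
  // not-present so that any later access triggers a page fault (#PF).
  //
  MarkFreedPages (BaseAddress, Pages);
  if (gCpu != NULL) {
    gCpu->SetMemoryAttributes (
            gCpu,
            BaseAddress,
            EFI_PAGES_TO_SIZE (Pages),
            EFI_MEMORY_RP
            );
  }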

This also implies that, once a page is allocated and freed, it cannot
be re-allocated. This brings another issue: there is a risk that the
memory space will be used up. To address it, this patch series adds
logic to put part (at most 64 pages at a time) of the freed pages back
into the page pool, so that the memory service still has memory to
allocate even after all memory space has been allocated once. This is
called memory promotion. The promoted pages are always taken from the
eldest freed pages.

To use this feature, one can simply set the following PCD to 1:
  gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask

Please note this feature cannot be used together with the heap guard
feature controlled by PCD gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask.
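
For example, a platform can enable it from its DSC file (a sketch; the
exact section should match where the PCD is declared,
[PcdsFixedAtBuild] here):

  [PcdsFixedAtBuild]
    gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask|0x01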

Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/MdeModulePkg.dec | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MdeModulePkg/MdeModulePkg.dec b/MdeModulePkg/MdeModulePkg.dec
index 6037504fa7..83736cd761 100644
--- a/MdeModulePkg/MdeModulePkg.dec
+++ b/MdeModulePkg/MdeModulePkg.dec
@@ -1029,6 +1029,12 @@
   # @Prompt Enable UEFI Stack Guard.
   gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard|FALSE|BOOLEAN|0x30001055
 
+  ## This mask is to control Use-After-Free Memory Detection behavior.
+  #   BIT0    - Enable Use-After-Free memory detection for UEFI modules.<BR>
+  #   BIT1..7 - Reserved for future uses.<BR>
+  # @Prompt The Use-After-Free Memory Detection feature mask
+  gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask|0x0|UINT8|0x30001056
+
 [PcdsFixedAtBuild, PcdsPatchableInModule]
   ## Dynamic type PCD can be registered callback function for Pcd setting action.
   #  PcdMaxPeiPcdCallBackNumberPerPcdEntry indicates the maximum number of callback function
-- 
2.16.2.windows.1




* [PATCH 2/3] UefiCpuPkg/CpuDxe: fix an infinite loop issue
  2018-10-19  1:50 [PATCH 0/3] Add use-after-free memory detection Jian J Wang
  2018-10-19  1:50 ` [PATCH 1/3] MdeModulePkg/MdeModulePkg.dec: add new PCD for UAF detection feature Jian J Wang
@ 2018-10-19  1:50 ` Jian J Wang
  2018-10-19 11:45   ` Laszlo Ersek
  2018-10-19  1:50 ` [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection Jian J Wang
  2018-10-19  1:56 ` [PATCH 0/3] Add " Wang, Jian J
  3 siblings, 1 reply; 16+ messages in thread
From: Jian J Wang @ 2018-10-19  1:50 UTC (permalink / raw)
  To: edk2-devel
  Cc: Laszlo Ersek, Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni

The UAF (Use-After-Free) memory detection feature will cause infinite
recursion into InitializePageTablePool(). This is due to the fact
that AllocateAlignedPages() is used to allocate page table pool
memory. This function will most likely call gBS->FreePages to free
the unaligned pages, which then causes another round of page
attribute changes, like below:

   FreePages() <===============|
=> SetMemoryAttributes()       |
=> <out of page table>         |
=> InitializePageTablePool()   |
=> AllocateAlignedPages()      |
=> FreePages() ================|

The solution is to add a lock in the page table pool allocation
function and fail any re-entrant request made before the current
allocation has completed.

Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 UefiCpuPkg/CpuDxe/CpuPageTable.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/UefiCpuPkg/CpuDxe/CpuPageTable.c b/UefiCpuPkg/CpuDxe/CpuPageTable.c
index 33e8ee2d2c..2145e623fa 100644
--- a/UefiCpuPkg/CpuDxe/CpuPageTable.c
+++ b/UefiCpuPkg/CpuDxe/CpuPageTable.c
@@ -100,6 +100,7 @@ PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] = {
 };
 
 PAGE_TABLE_POOL                   *mPageTablePool = NULL;
+EFI_LOCK                          mPageTablePoolLock = EFI_INITIALIZE_LOCK_VARIABLE (TPL_NOTIFY);
 PAGE_TABLE_LIB_PAGING_CONTEXT     mPagingContext;
 EFI_SMM_BASE2_PROTOCOL            *mSmmBase2 = NULL;
 
@@ -1045,6 +1046,12 @@ InitializePageTablePool (
 {
   VOID                      *Buffer;
   BOOLEAN                   IsModified;
+  EFI_STATUS                Status;
+
+  Status = EfiAcquireLockOrFail (&mPageTablePoolLock);
+  if (EFI_ERROR (Status)) {
+    return FALSE;
+  }
 
   //
   // Always reserve at least PAGE_TABLE_POOL_UNIT_PAGES, including one page for
@@ -1056,7 +1063,10 @@ InitializePageTablePool (
   Buffer = AllocateAlignedPages (PoolPages, PAGE_TABLE_POOL_ALIGNMENT);
   if (Buffer == NULL) {
     DEBUG ((DEBUG_ERROR, "ERROR: Out of aligned pages\r\n"));
+    EfiReleaseLock (&mPageTablePoolLock);
     return FALSE;
+  } else {
+    DEBUG ((DEBUG_INFO, "Paging: added %d pages to page table pool\r\n", PoolPages));
   }
 
   //
@@ -1092,6 +1102,8 @@ InitializePageTablePool (
     );
   ASSERT (IsModified == TRUE);
 
+  EfiReleaseLock (&mPageTablePoolLock);
+
   return TRUE;
 }
 
-- 
2.16.2.windows.1




* [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection
  2018-10-19  1:50 [PATCH 0/3] Add use-after-free memory detection Jian J Wang
  2018-10-19  1:50 ` [PATCH 1/3] MdeModulePkg/MdeModulePkg.dec: add new PCD for UAF detection feature Jian J Wang
  2018-10-19  1:50 ` [PATCH 2/3] UefiCpuPkg/CpuDxe: fix an infinite loop issue Jian J Wang
@ 2018-10-19  1:50 ` Jian J Wang
  2018-10-19 12:04   ` Laszlo Ersek
  2018-10-22  2:53   ` Zeng, Star
  2018-10-19  1:56 ` [PATCH 0/3] Add " Wang, Jian J
  3 siblings, 2 replies; 16+ messages in thread
From: Jian J Wang @ 2018-10-19  1:50 UTC (permalink / raw)
  To: edk2-devel
  Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni, Laszlo Ersek

UAF (Use-After-Free) memory detection is a new feature introduced to
detect illegal access to memory that has been freed. The principle
behind it is similar to the heap guard feature: the core turns all
pool memory allocations into page allocations and marks the pages as
not-present once they are freed.

This also implies that, once a page is allocated and freed, it cannot
be re-allocated. This brings another issue: there is a risk that the
memory space will be used up. To address it, the memory service adds
logic to put part (at most 64 pages at a time) of the freed pages back
into the page pool, so that the memory service still has memory to
allocate even after all memory space has been allocated once. This is
called memory promotion. The promoted pages are always taken from the
eldest pages which have been freed.

To use this feature, one can simply set the following PCD to 1:
  gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask

Please note this feature cannot be used together with the heap guard
feature controlled by PCD gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask.

Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/Core/Dxe/DxeMain.h              |   1 +
 MdeModulePkg/Core/Dxe/DxeMain.inf            |   1 +
 MdeModulePkg/Core/Dxe/Gcd/Gcd.c              |   8 +
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.c        | 393 ++++++++++++++++++++++++++-
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.h        |  66 +++++
 MdeModulePkg/Core/Dxe/Mem/Page.c             |  39 ++-
 MdeModulePkg/Core/Dxe/Mem/Pool.c             |  21 +-
 MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c |  16 +-
 8 files changed, 519 insertions(+), 26 deletions(-)

diff --git a/MdeModulePkg/Core/Dxe/DxeMain.h b/MdeModulePkg/Core/Dxe/DxeMain.h
index 2dec9da5e3..ae75cc5b25 100644
--- a/MdeModulePkg/Core/Dxe/DxeMain.h
+++ b/MdeModulePkg/Core/Dxe/DxeMain.h
@@ -92,6 +92,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 #include <Library/DxeServicesLib.h>
 #include <Library/DebugAgentLib.h>
 #include <Library/CpuExceptionHandlerLib.h>
+#include <Library/DebugPrintErrorLevelLib.h>
 
 
 //
diff --git a/MdeModulePkg/Core/Dxe/DxeMain.inf b/MdeModulePkg/Core/Dxe/DxeMain.inf
index 10375443c0..d91258c087 100644
--- a/MdeModulePkg/Core/Dxe/DxeMain.inf
+++ b/MdeModulePkg/Core/Dxe/DxeMain.inf
@@ -198,6 +198,7 @@
   gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType                       ## CONSUMES
   gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask                   ## CONSUMES
   gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard                           ## CONSUMES
+  gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask       ## CONSUMES
 
 # [Hob]
 # RESOURCE_DESCRIPTOR   ## CONSUMES
diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
index d9c65a8045..e43ec74010 100644
--- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
+++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
@@ -160,6 +160,10 @@ CoreDumpGcdMemorySpaceMap (
     EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *MemorySpaceMap;
     UINTN                            Index;
 
+    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
+      return;
+    }
+
     Status = CoreGetMemorySpaceMap (&NumberOfDescriptors, &MemorySpaceMap);
     ASSERT (Status == EFI_SUCCESS && MemorySpaceMap != NULL);
 
@@ -202,6 +206,10 @@ CoreDumpGcdIoSpaceMap (
     EFI_GCD_IO_SPACE_DESCRIPTOR  *IoSpaceMap;
     UINTN                        Index;
 
+    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
+      return;
+    }
+
     Status = CoreGetIoSpaceMap (&NumberOfDescriptors, &IoSpaceMap);
     ASSERT (Status == EFI_SUCCESS && IoSpaceMap != NULL);
 
diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
index 663f969c0d..b120c04f8f 100644
--- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
@@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
 GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
                                     = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
 
+//
+// Used for Use-After-Free memory detection to promote freed but not used pages.
+//
+GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
+
 /**
   Set corresponding bits in bitmap table to 1 according to the address.
 
@@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
 
   @return An integer containing the guarded memory bitmap.
 **/
-UINTN
+UINT64
 GetGuardedMemoryBits (
   IN EFI_PHYSICAL_ADDRESS    Address,
   IN UINTN                   NumberOfPages
@@ -387,7 +392,7 @@ GetGuardedMemoryBits (
 {
   UINT64            *BitMap;
   UINTN             Bits;
-  UINTN             Result;
+  UINT64            Result;
   UINTN             Shift;
   UINTN             BitsToUnitEnd;
 
@@ -1203,6 +1208,373 @@ SetAllGuardPages (
   }
 }
 
+/**
+  Check to see if the Use-After-Free (UAF) feature is enabled or not.
+
+  @return TRUE/FALSE.
+**/
+BOOLEAN
+IsUafEnabled (
+  VOID
+  )
+{
+  ASSERT (!IsHeapGuardEnabled());
+  return ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) != 0);
+}
+
+/**
+  Find the address of top-most guarded free page.
+
+  @param[out]  Address    Start address of top-most guarded free page.
+
+  @return VOID.
+**/
+VOID
+GetLastGuardedFreePageAddress (
+  OUT EFI_PHYSICAL_ADDRESS      *Address
+  )
+{
+  EFI_PHYSICAL_ADDRESS    AddressGranularity;
+  EFI_PHYSICAL_ADDRESS    BaseAddress;
+  UINTN                   Level;
+  UINT64                  Map;
+  INTN                    Index;
+
+  ASSERT (mMapLevel >= 1);
+
+  BaseAddress = 0;
+  Map = mGuardedMemoryMap;
+  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
+       ++Level) {
+    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
+
+    //
+    // Find the non-NULL entry at largest index.
+    //
+    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
+      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
+        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
+        Map = ((UINT64 *)(UINTN)Map)[Index];
+        break;
+      }
+    }
+  }
+
+  //
+  // Find the non-zero MSB then get the page address.
+  //
+  while (Map != 0) {
+    Map = RShiftU64 (Map, 1);
+    BaseAddress += EFI_PAGES_TO_SIZE (1);
+  }
+
+  *Address = BaseAddress;
+}
+
+/**
+  Record freed pages.
+
+  @param[in]  BaseAddress   Base address of just freed pages.
+  @param[in]  Pages         Number of freed pages.
+
+  @return VOID.
+**/
+VOID
+MarkFreedPages (
+  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
+  IN UINTN                    Pages
+  )
+{
+  SetGuardedMemoryBits (BaseAddress, Pages);
+}
+
+/**
+  Record freed pages as well as mark them as not-present, if possible.
+
+  @param[in]  BaseAddress   Base address of just freed pages.
+  @param[in]  Pages         Number of freed pages.
+
+  @return VOID.
+**/
+VOID
+EFIAPI
+GuardFreedPages (
+  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
+  IN  UINTN                   Pages
+  )
+{
+  EFI_STATUS      Status;
+
+  //
+  // Do nothing if
+  // - Use-After-Free detection is not enabled
+  // - Freed memory is legacy memory lower than 1MB
+  //
+  if (!IsUafEnabled () || BaseAddress < BASE_1MB) {
+    return;
+  }
+
+  MarkFreedPages (BaseAddress, Pages);
+  if (gCpu != NULL) {
+    //
+    // Set the flag to make sure that memory allocated for page table
+    // operations is not guarded; otherwise an infinite loop could occur.
+    //
+    mOnGuarding = TRUE;
+    //
+    // Note: This might overwrite other attributes needed by other features,
+    // such as NX memory protection.
+    //
+    Status = gCpu->SetMemoryAttributes (
+                     gCpu,
+                     BaseAddress,
+                     EFI_PAGES_TO_SIZE(Pages),
+                     EFI_MEMORY_RP
+                     );
+    //
+    // Normally we should check the returned Status. But SetMemoryAttributes()
+    // might itself allocate/free memory, which would make this call fail. It's
+    // a rare case, so it's OK to let a few tiny holes stay unguarded.
+    //
+    if (EFI_ERROR (Status)) {
+      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%d)\n", BaseAddress, Pages));
+    }
+    mOnGuarding = FALSE;
+  }
+}
+
+/**
+  Mark all pages freed before CPU Arch Protocol as not-present.
+
+**/
+VOID
+GuardAllFreedPages (
+  VOID
+  )
+{
+  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    TableEntry;
+  UINT64    Address;
+  UINT64    GuardPage;
+  INTN      Level;
+  UINTN     BitIndex;
+  UINTN     GuardPageNumber;
+
+  if (mGuardedMemoryMap == 0 ||
+      mMapLevel == 0 ||
+      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
+    return;
+  }
+
+  CopyMem (Entries, mLevelMask, sizeof (Entries));
+  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
+
+  SetMem (Tables, sizeof(Tables), 0);
+  SetMem (Addresses, sizeof(Addresses), 0);
+  SetMem (Indices, sizeof(Indices), 0);
+
+  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+  Tables[Level]   = mGuardedMemoryMap;
+  Address         = 0;
+  GuardPage       = (UINT64)-1;
+  GuardPageNumber = 0;
+
+  while (TRUE) {
+    if (Indices[Level] > Entries[Level]) {
+      Tables[Level] = 0;
+      Level        -= 1;
+    } else {
+      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
+      Address     = Addresses[Level];
+
+      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
+        Level            += 1;
+        Tables[Level]     = TableEntry;
+        Addresses[Level]  = Address;
+        Indices[Level]    = 0;
+
+        continue;
+      } else {
+        BitIndex = 1;
+        while (BitIndex != 0) {
+          if ((TableEntry & BitIndex) != 0) {
+            if (GuardPage == (UINT64)-1) {
+              GuardPage = Address;
+            }
+            ++GuardPageNumber;
+          } else if (GuardPageNumber > 0) {
+            GuardFreedPages (GuardPage, GuardPageNumber);
+            GuardPageNumber = 0;
+            GuardPage       = (UINT64)-1;
+          }
+
+          if (TableEntry == 0) {
+            break;
+          }
+
+          Address += EFI_PAGES_TO_SIZE (1);
+          BitIndex = LShiftU64(BitIndex, 1);
+        }
+      }
+    }
+
+    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
+      break;
+    }
+
+    Indices[Level] += 1;
+    Address = (Level == 0) ? 0 : Addresses[Level - 1];
+    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
+
+  }
+
+  //
+  // Update the maximum address of freed page which can be used for memory
+  // promotion upon out-of-memory-space.
+  //
+  GetLastGuardedFreePageAddress (&Address);
+  if (Address != 0) {
+    mLastPromotedPage = Address;
+  }
+}
+
+/**
+  This function checks to see if the given memory map descriptor in a memory map
+  can be merged with any guarded free pages.
+
+  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
+  @param  MaxAddress        Maximum address to stop the merge.
+
+  @return VOID
+
+**/
+VOID
+MergeGuardPages (
+  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
+  IN EFI_PHYSICAL_ADDRESS       MaxAddress
+  )
+{
+  EFI_PHYSICAL_ADDRESS        EndAddress;
+  UINT64                      Bitmap;
+  INTN                        Pages;
+
+  if (!IsUafEnabled () ||
+      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
+    return;
+  }
+
+  Bitmap = 0;
+  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
+  Pages -= MemoryMapEntry->NumberOfPages;
+  while (Pages > 0) {
+    if (Bitmap == 0) {
+      EndAddress = MemoryMapEntry->PhysicalStart +
+                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
+      Bitmap = GetGuardedMemoryBits(EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
+    }
+
+    if ((Bitmap & 1) == 0) {
+      break;
+    }
+
+    Pages--;
+    MemoryMapEntry->NumberOfPages++;
+    Bitmap = RShiftU64 (Bitmap, 1);
+  }
+}
+
+/**
+  Put part (at most 64 pages at a time) of the guarded free pages back into
+  the free page pool.
+
+  Use-After-Free detection takes a 'use once, then throw away' approach to
+  detect any illegal access to freed memory. The thrown-away memory will be
+  marked as not-present so that any access to that memory (after free) will
+  trigger a page fault.
+
+  The problem is that this will consume lots of memory space. Once no memory
+  is left to allocate, we have to restore part of the freed pages to their
+  normal function. Otherwise the whole system will stop functioning.
+
+  @param  StartAddress    Start address of promoted memory.
+  @param  EndAddress      End address of promoted memory.
+
+  @return TRUE    Succeeded to promote memory.
+  @return FALSE   No free memory found.
+
+**/
+BOOLEAN
+PromoteGuardedFreePages (
+  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
+  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
+  )
+{
+  EFI_STATUS              Status;
+  UINTN                   AvailablePages;
+  UINT64                  Bitmap;
+  EFI_PHYSICAL_ADDRESS    Start;
+
+  if (!IsUafEnabled ()) {
+    return FALSE;
+  }
+
+  //
+  // Similar to the memory allocation service, always search the freed pages
+  // in descending address order.
+  //
+  Start           = mLastPromotedPage;
+  AvailablePages  = 0;
+  while (AvailablePages == 0) {
+    Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
+    //
+    // If the address wraps around, restart from the top-most truly freed page.
+    //
+    if (Start > mLastPromotedPage) {
+      GetLastGuardedFreePageAddress (&Start);
+      ASSERT (Start != 0);
+      Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
+    }
+
+    Bitmap = GetGuardedMemoryBits(Start, GUARDED_HEAP_MAP_ENTRY_BITS);
+    while (Bitmap > 0) {
+      if ((Bitmap & 1) != 0) {
+        ++AvailablePages;
+      } else if (AvailablePages == 0) {
+        Start += EFI_PAGES_TO_SIZE (1);
+      } else {
+        break;
+      }
+
+      Bitmap = RShiftU64(Bitmap, 1);
+    }
+  }
+
+  if (AvailablePages) {
+    DEBUG((DEBUG_INFO, "Promoted pages: %lX (%X)\r\n", Start, AvailablePages));
+    if (gCpu != NULL) {
+      //
+      // Set the flag to make sure that memory allocated for page table
+      // operations is not guarded; otherwise an infinite loop could occur.
+      //
+      mOnGuarding = TRUE;
+      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
+      ASSERT_EFI_ERROR (Status);
+      mOnGuarding = FALSE;
+    }
+
+    mLastPromotedPage = Start;
+    *StartAddress     = Start;
+    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
+    return TRUE;
+  }
+
+  return FALSE;
+}
+
 /**
   Notify function used to set all Guard pages before CPU Arch Protocol installed.
 **/
@@ -1212,7 +1584,14 @@ HeapGuardCpuArchProtocolNotify (
   )
 {
   ASSERT (gCpu != NULL);
-  SetAllGuardPages ();
+
+  if ((PcdGet8 (PcdHeapGuardPropertyMask) & 0x0F) != 0) {
+    SetAllGuardPages ();
+  }
+
+  if (IsUafEnabled ()) {
+    GuardAllFreedPages ();
+  }
 }
 
 /**
@@ -1264,6 +1643,14 @@ DumpGuardedMemoryBitmap (
   CHAR8     *Ruler1;
   CHAR8     *Ruler2;
 
+  if (!IsHeapGuardEnabled () && !IsUafEnabled ()) {
+    return;
+  }
+
+  if ((GetDebugPrintErrorLevel () & HEAP_GUARD_DEBUG_LEVEL) == 0) {
+    return;
+  }
+
   if (mGuardedMemoryMap == 0 ||
       mMapLevel == 0 ||
       mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
index 8c34692439..3f2a5283c2 100644
--- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
@@ -407,6 +407,72 @@ HeapGuardCpuArchProtocolNotify (
   VOID
   );
 
+/**
+  This function checks to see if the given memory map descriptor in a memory map
+  can be merged with any guarded free pages.
+
+  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
+  @param  MaxAddress        Maximum address to stop the merge.
+
+  @return VOID
+
+**/
+VOID
+MergeGuardPages (
+  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
+  IN EFI_PHYSICAL_ADDRESS       MaxAddress
+  );
+
+/**
+  Record freed pages as well as mark them as not-present, if possible.
+
+  @param[in]  BaseAddress   Base address of just freed pages.
+  @param[in]  Pages         Number of freed pages.
+
+  @return VOID.
+**/
+VOID
+EFIAPI
+GuardFreedPages (
+  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
+  IN  UINTN                   Pages
+  );
+
+/**
+  Check to see if the use-after-free (UAF) feature is enabled or not.
+
+  @return TRUE/FALSE.
+**/
+BOOLEAN
+IsUafEnabled (
+  VOID
+  );
+
+/**
+  Put part (at most 64 pages at a time) of the guarded free pages back into
+  the free page pool.
+
+  Use-After-Free detection takes a 'use once, then throw away' approach to
+  detect any illegal access to freed memory. The thrown-away memory will be
+  marked as not-present so that any access to that memory (after free) will
+  trigger a page fault.
+
+  The problem is that this will consume lots of memory space. Once no memory
+  is left to allocate, we have to restore part of the freed pages to their
+  normal function. Otherwise the whole system will stop functioning.
+
+  @param  StartAddress    Start address of promoted memory.
+  @param  EndAddress      End address of promoted memory.
+
+  @return TRUE    Succeeded to promote memory.
+  @return FALSE   No free memory found.
+
+**/
+BOOLEAN
+PromoteGuardedFreePages (
+  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
+  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
+  );
+
 extern BOOLEAN mOnGuarding;
 
 #endif
diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
index 3b4cc08e7c..9f4419fa5b 100644
--- a/MdeModulePkg/Core/Dxe/Mem/Page.c
+++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
@@ -401,9 +401,11 @@ PromoteMemoryResource (
   VOID
   )
 {
-  LIST_ENTRY         *Link;
-  EFI_GCD_MAP_ENTRY  *Entry;
-  BOOLEAN            Promoted;
+  LIST_ENTRY            *Link;
+  EFI_GCD_MAP_ENTRY     *Entry;
+  BOOLEAN               Promoted;
+  EFI_PHYSICAL_ADDRESS  StartAddress;
+  EFI_PHYSICAL_ADDRESS  EndAddress;
 
   DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
 
@@ -451,6 +453,24 @@ PromoteMemoryResource (
 
   CoreReleaseGcdMemoryLock ();
 
+  if (!Promoted) {
+    //
+    // If Use-After-Free detection is enabled, we might promote pages from
+    // guarded free pages.
+    //
+    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
+    if (Promoted) {
+      CoreAcquireGcdMemoryLock ();
+      CoreAddRange (
+        EfiConventionalMemory,
+        StartAddress,
+        EndAddress,
+        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
+        );
+      CoreReleaseGcdMemoryLock ();
+    }
+  }
+
   return Promoted;
 }
 /**
@@ -896,9 +916,13 @@ CoreConvertPagesEx (
     }
 
     //
-    // Add our new range in
+    // Add our new range in. Don't do this for freed pages if Use-After-Free
+    // detection is enabled.
     //
-    CoreAddRange (MemType, Start, RangeEnd, Attribute);
+    if (!IsUafEnabled () || !ChangingType || MemType != EfiConventionalMemory) {
+      CoreAddRange (MemType, Start, RangeEnd, Attribute);
+    }
+
     if (ChangingType && (MemType == EfiConventionalMemory)) {
       //
       // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
@@ -1514,6 +1538,7 @@ CoreFreePages (
 
   Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
   if (!EFI_ERROR (Status)) {
+    GuardFreedPages (Memory, NumberOfPages);
     CoreUpdateProfile (
       (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
       MemoryProfileActionFreePages,
@@ -1908,9 +1933,7 @@ Done:
   *MemoryMapSize = BufferSize;
 
   DEBUG_CODE (
-    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
-      DumpGuardedMemoryBitmap ();
-    }
+    DumpGuardedMemoryBitmap ();
   );
 
   return Status;
diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
index 1ff2061f7f..0cd74267b3 100644
--- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
+++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
@@ -26,7 +26,8 @@ typedef struct {
 } POOL_FREE;
 
 
-#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
+#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
+#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
 typedef struct {
   UINT32          Signature;
   UINT32          Reserved;
@@ -367,6 +368,7 @@ CoreAllocatePoolI (
   UINTN       NoPages;
   UINTN       Granularity;
   BOOLEAN     HasPoolTail;
+  BOOLEAN     PageAsPool;
 
   ASSERT_LOCKED (&mPoolMemoryLock);
 
@@ -386,6 +388,7 @@ CoreAllocatePoolI (
 
   HasPoolTail  = !(NeedGuard &&
                    ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
+  PageAsPool = (IsUafEnabled () && !mOnGuarding);
 
   //
   // Adjusting the Size to be of proper alignment so that
@@ -406,7 +409,7 @@ CoreAllocatePoolI (
   // If allocation is over max size, just allocate pages for the request
   // (slow)
   //
-  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
+  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
     if (!HasPoolTail) {
       Size -= sizeof (POOL_TAIL);
     }
@@ -498,7 +501,7 @@ Done:
     //
     // If we have a pool buffer, fill in the header & tail info
     //
-    Head->Signature = POOL_HEAD_SIGNATURE;
+    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
     Head->Size      = Size;
     Head->Type      = (EFI_MEMORY_TYPE) PoolType;
     Buffer          = Head->Data;
@@ -615,6 +618,7 @@ CoreFreePoolPagesI (
   CoreFreePoolPages (Memory, NoPages);
   CoreReleaseMemoryLock ();
 
+  GuardFreedPages (Memory, NoPages);
   ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
     (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
 }
@@ -685,15 +689,19 @@ CoreFreePoolI (
   UINTN       Granularity;
   BOOLEAN     IsGuarded;
   BOOLEAN     HasPoolTail;
+  BOOLEAN     PageAsPool;
 
   ASSERT(Buffer != NULL);
   //
   // Get the head & tail of the pool entry
   //
-  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
+  Head = BASE_CR (Buffer, POOL_HEAD, Data);
   ASSERT(Head != NULL);
 
-  if (Head->Signature != POOL_HEAD_SIGNATURE) {
+  if (Head->Signature != POOL_HEAD_SIGNATURE &&
+      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
+    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
+            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
     return EFI_INVALID_PARAMETER;
   }
 
@@ -701,6 +709,7 @@ CoreFreePoolI (
                 IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
   HasPoolTail = !(IsGuarded &&
                   ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
+  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
 
   if (HasPoolTail) {
     Tail = HEAD_TO_TAIL (Head);
@@ -757,7 +766,7 @@ CoreFreePoolI (
   //
   // If it's not on the list, it must be pool pages
   //
-  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
+  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
 
     //
     // Return the memory pages back to free memory
diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
index 05eb4f422b..f6595c90ed 100644
--- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
+++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
@@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 #include <Guid/PropertiesTable.h>
 
 #include "DxeMain.h"
+#include "HeapGuard.h"
 
 #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
   ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
@@ -205,16 +206,13 @@ MergeMemoryMap (
     NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
 
     do {
-      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
+      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
+      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
       if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
-          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
-          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
-          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
-        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
-        if (NewMemoryMapEntry != MemoryMapEntry) {
-          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
-        }
-
+          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
+          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
+          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
+        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
         NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
         continue;
       } else {
-- 
2.16.2.windows.1




* Re: [PATCH 0/3] Add use-after-free memory detection
  2018-10-19  1:50 [PATCH 0/3] Add use-after-free memory detection Jian J Wang
                   ` (2 preceding siblings ...)
  2018-10-19  1:50 ` [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection Jian J Wang
@ 2018-10-19  1:56 ` Wang, Jian J
  3 siblings, 0 replies; 16+ messages in thread
From: Wang, Jian J @ 2018-10-19  1:56 UTC (permalink / raw)
  To: edk2-devel, edk2-devel@lists.01.org

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1240

Tests:
a. Access freed pages in shell to see if it triggers #PF (example below)
b. Use debugger to verify page table attributes matching freed pages
c. Exhaust memory space to see if the BIOS still works
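
For test (a), something like the following from the UEFI Shell (the
address is hypothetical; take it from a debug log entry for a freed
buffer) should raise a #PF instead of dumping data:

  Shell> mm 7F123000 -w 4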

Regards,
Jian


> -----Original Message-----
> From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org]
> Sent: Friday, October 19, 2018 9:50 AM
> To: edk2-devel@lists.01.org
> Subject: [edk2] [PATCH 0/3] Add use-after-free memory detection
> 
> UAF (Use-After-Free) memory detection is a new feature introduced to
> detect illegal access to memory that has been freed.
> 
> Jian J Wang (3):
>   MdeModulePkg/MdeModulePkg.dec: add new PCD for UAF detection feature
>   UefiCpuPkg/CpuDxe: fix an infinite loop issue
>   MdeModulePkg/Core: add use-after-free memory detection
> 
>  MdeModulePkg/Core/Dxe/DxeMain.h              |   1 +
>  MdeModulePkg/Core/Dxe/DxeMain.inf            |   1 +
>  MdeModulePkg/Core/Dxe/Gcd/Gcd.c              |   8 +
>  MdeModulePkg/Core/Dxe/Mem/HeapGuard.c        | 393
> ++++++++++++++++++++++++++-
>  MdeModulePkg/Core/Dxe/Mem/HeapGuard.h        |  66 +++++
>  MdeModulePkg/Core/Dxe/Mem/Page.c             |  39 ++-
>  MdeModulePkg/Core/Dxe/Mem/Pool.c             |  21 +-
>  MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c |  16 +-
>  MdeModulePkg/MdeModulePkg.dec                |   6 +
>  UefiCpuPkg/CpuDxe/CpuPageTable.c             |  12 +
>  10 files changed, 537 insertions(+), 26 deletions(-)
> 
> --
> 2.16.2.windows.1
> 
> _______________________________________________
> edk2-devel mailing list
> edk2-devel@lists.01.org
> https://lists.01.org/mailman/listinfo/edk2-devel



* Re: [PATCH 1/3] MdeModulePkg/MdeModulePkg.dec: add new PCD for UAF detection feature
  2018-10-19  1:50 ` [PATCH 1/3] MdeModulePkg/MdeModulePkg.dec: add new PCD for UAF detection feature Jian J Wang
@ 2018-10-19 11:27   ` Laszlo Ersek
  2018-10-22  2:20   ` Zeng, Star
  1 sibling, 0 replies; 16+ messages in thread
From: Laszlo Ersek @ 2018-10-19 11:27 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel; +Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni

On 10/19/18 03:50, Jian J Wang wrote:
> UAF (Use-After-Free) memory detection is a new feature introduced to
> detect illegal access to memory that has been freed. The principle
> behind it is similar to the heap guard feature: we turn all pool
> memory allocations into page allocations and mark the pages as
> not-present once they are freed.
> 
> This also implies that, once a page is allocated and freed, it cannot
> be re-allocated. This brings another issue: there is a risk that the
> memory space will be used up. To address it, this patch series adds
> logic to put part (at most 64 pages at a time) of the freed pages back
> into the page pool, so that the memory service still has memory to
> allocate even after all memory space has been allocated once. This is
> called memory promotion. The promoted pages are always taken from the
> eldest freed pages.
> 
> To use this feature, one can simply set the following PCD to 1:
>   gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
> 
> Please note this feature cannot be used together with the heap guard
> feature controlled by PCD gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask.
> 
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>  MdeModulePkg/MdeModulePkg.dec | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/MdeModulePkg/MdeModulePkg.dec b/MdeModulePkg/MdeModulePkg.dec
> index 6037504fa7..83736cd761 100644
> --- a/MdeModulePkg/MdeModulePkg.dec
> +++ b/MdeModulePkg/MdeModulePkg.dec
> @@ -1029,6 +1029,12 @@
>    # @Prompt Enable UEFI Stack Guard.
>    gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard|FALSE|BOOLEAN|0x30001055
>  
> +  ## This mask is to control Use-After-Free Memory Detection behavior.
> +  #   BIT0    - Enable Use-After-Free memory detection for UEFI modules.<BR>
> +  #   BIT1..7 - Reserved for future uses.<BR>
> +  # @Prompt The Use-After-Free Memory Detection feature mask
> +  gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask|0x0|UINT8|0x30001056
> +
>  [PcdsFixedAtBuild, PcdsPatchableInModule]
>    ## Dynamic type PCD can be registered callback function for Pcd setting action.
>    #  PcdMaxPeiPcdCallBackNumberPerPcdEntry indicates the maximum number of callback function
> 

The default value looks fine to me; can't comment on the rest.

Acked-by: Laszlo Ersek <lersek@redhat.com>



* Re: [PATCH 2/3] UefiCpuPkg/CpuDxe: fix an infinite loop issue
  2018-10-19  1:50 ` [PATCH 2/3] UefiCpuPkg/CpuDxe: fix an infinite loop issue Jian J Wang
@ 2018-10-19 11:45   ` Laszlo Ersek
  2018-10-22  7:23     ` Wang, Jian J
  0 siblings, 1 reply; 16+ messages in thread
From: Laszlo Ersek @ 2018-10-19 11:45 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel; +Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni

On 10/19/18 03:50, Jian J Wang wrote:
> The UAF (Use-After-Free) memory detection feature will cause infinite
> recursion into InitializePageTablePool(). This is due to the fact
> that AllocateAlignedPages() is used to allocate page table pool
> memory. This function will most likely call gBS->FreePages to free
> the unaligned pages, which then causes another round of page
> attribute changes, like below:
> 
>    FreePages() <===============|
> => SetMemoryAttributes()       |

This should likely be "SetMemorySpaceAttributes" (the DXE service), or else "CpuSetMemoryAttributes" (the underlying CpuDxe function name).

> => <out of page table>         |
> => InitializePageTablePool()   |
> => AllocateAlignedPages()      |
> => FreePages() ================|
> 
> The solution is to add a lock in the page table pool allocation
> function and fail any re-entrant request made before the current
> allocation has completed.

OK, but what is the end result? InitializePageTablePool() will return FALSE. How far back up is that error propagated? To what components will the error be visible?

BTW, I've found the following comment in CpuSetMemoryAttributes():

  //
  // During memory attributes updating, new pages may be allocated to setup
  // smaller granularity of page table. Page allocation action might then cause
  // another calling of CpuSetMemoryAttributes() recursively, due to memory
  // protection policy configured (such as PcdDxeNxMemoryProtectionPolicy).
  // Since this driver will always protect memory used as page table by itself,
  // there's no need to apply protection policy requested from memory service.
  // So it's safe to just return EFI_SUCCESS if this time of calling is caused
  // by page table memory allocation.
  //

Is the current argument similar? I think it should be documented somehow.

> 
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>  UefiCpuPkg/CpuDxe/CpuPageTable.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/UefiCpuPkg/CpuDxe/CpuPageTable.c b/UefiCpuPkg/CpuDxe/CpuPageTable.c
> index 33e8ee2d2c..2145e623fa 100644
> --- a/UefiCpuPkg/CpuDxe/CpuPageTable.c
> +++ b/UefiCpuPkg/CpuDxe/CpuPageTable.c
> @@ -100,6 +100,7 @@ PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] = {
>  };
>  
>  PAGE_TABLE_POOL                   *mPageTablePool = NULL;
> +EFI_LOCK                          mPageTablePoolLock = EFI_INITIALIZE_LOCK_VARIABLE (TPL_NOTIFY);

Why does this have to be an "EFI_LOCK"? Can't we just use a global variable? (I don't understand why messing with the TPL is necessary.)

In fact, I totally don't understand the point of EfiAcquireLock(). If we have two independent resources, each protected with its own separate lock, then why do both locks share the system-wide TPL between each other?
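
A minimal re-entrancy flag (just a sketch, identifier made up) seems
sufficient to break the cycle, without any TPL manipulation:

  STATIC BOOLEAN  mPageTablePoolBusy = FALSE;

  BOOLEAN
  InitializePageTablePool (
    IN UINTN  PoolPages
    )
  {
    if (mPageTablePoolBusy) {
      //
      // Re-entered via AllocateAlignedPages()->FreePages(); refuse,
      // breaking the infinite recursion.
      //
      return FALSE;
    }
    mPageTablePoolBusy = TRUE;

    //
    // ... existing pool allocation and protection logic ...
    //

    mPageTablePoolBusy = FALSE;
    return TRUE;
  }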


>  PAGE_TABLE_LIB_PAGING_CONTEXT     mPagingContext;
>  EFI_SMM_BASE2_PROTOCOL            *mSmmBase2 = NULL;
>  
> @@ -1045,6 +1046,12 @@ InitializePageTablePool (
>  {
>    VOID                      *Buffer;
>    BOOLEAN                   IsModified;
> +  EFI_STATUS                Status;
> +
> +  Status = EfiAcquireLockOrFail (&mPageTablePoolLock);
> +  if (EFI_ERROR (Status)) {
> +    return FALSE;
> +  }
>  
>    //
>    // Always reserve at least PAGE_TABLE_POOL_UNIT_PAGES, including one page for
> @@ -1056,7 +1063,10 @@ InitializePageTablePool (
>    Buffer = AllocateAlignedPages (PoolPages, PAGE_TABLE_POOL_ALIGNMENT);
>    if (Buffer == NULL) {
>      DEBUG ((DEBUG_ERROR, "ERROR: Out of aligned pages\r\n"));
> +    EfiReleaseLock (&mPageTablePoolLock);

I feel that it would be safer to introduce a "Done" label at the bottom of the function, and release the lock there.

(Again, I'm not sure why this has to be a "lock".)

>      return FALSE;
> +  } else {
> +    DEBUG ((DEBUG_INFO, "Paging: added %d pages to page table pool\r\n", PoolPages));

Please don't print UINTN values with %d. Cast them to UINT64 and log them with %lu.
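
I.e.:

  DEBUG ((DEBUG_INFO, "Paging: added %lu pages to page table pool\r\n",
    (UINT64)PoolPages));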

>    }
>  
>    //
> @@ -1092,6 +1102,8 @@ InitializePageTablePool (
>      );
>    ASSERT (IsModified == TRUE);
>  
> +  EfiReleaseLock (&mPageTablePoolLock);
> +
>    return TRUE;
>  }
>  
> 

Thanks
Laszlo



* Re: [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection
  2018-10-19  1:50 ` [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection Jian J Wang
@ 2018-10-19 12:04   ` Laszlo Ersek
  2018-10-22  7:34     ` Wang, Jian J
  2018-10-22  2:53   ` Zeng, Star
  1 sibling, 1 reply; 16+ messages in thread
From: Laszlo Ersek @ 2018-10-19 12:04 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel; +Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni

On 10/19/18 03:50, Jian J Wang wrote:
> UAF (Use-After-Free) memory detection is a new feature introduced to
> detect illegal access to memory that has been freed. The principle
> behind it is similar to the heap guard feature: the core turns all
> pool memory allocations into page allocations and marks the pages as
> not-present once they are freed.
> 
> This also implies that, once a page is allocated and freed, it cannot
> be re-allocated. This brings another issue: there is a risk that the
> memory space will be used up. To address it, the memory service adds
> logic to put part (at most 64 pages at a time) of the freed pages back
> into the page pool, so that the memory service still has memory to
> allocate even after all memory space has been allocated once. This is
> called memory promotion. The promoted pages are always taken from the
> eldest pages which have been freed.
> 
> To use this feature, one can simply set the following PCD to 1:
>   gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
> 
> Please note this feature cannot be used together with the heap guard
> feature controlled by PCD gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask.
> 
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>  MdeModulePkg/Core/Dxe/DxeMain.h              |   1 +
>  MdeModulePkg/Core/Dxe/DxeMain.inf            |   1 +
>  MdeModulePkg/Core/Dxe/Gcd/Gcd.c              |   8 +
>  MdeModulePkg/Core/Dxe/Mem/HeapGuard.c        | 393 ++++++++++++++++++++++++++-
>  MdeModulePkg/Core/Dxe/Mem/HeapGuard.h        |  66 +++++
>  MdeModulePkg/Core/Dxe/Mem/Page.c             |  39 ++-
>  MdeModulePkg/Core/Dxe/Mem/Pool.c             |  21 +-
>  MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c |  16 +-
>  8 files changed, 519 insertions(+), 26 deletions(-)

I don't think I can review this patch well (I'm not an MdeModulePkg
reviewer). However I do think this patch is way too large. I suggest
splitting out everything that counts as "preparation" to separate patches.

For example:

> diff --git a/MdeModulePkg/Core/Dxe/DxeMain.h b/MdeModulePkg/Core/Dxe/DxeMain.h
> index 2dec9da5e3..ae75cc5b25 100644
> --- a/MdeModulePkg/Core/Dxe/DxeMain.h
> +++ b/MdeModulePkg/Core/Dxe/DxeMain.h
> @@ -92,6 +92,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>  #include <Library/DxeServicesLib.h>
>  #include <Library/DebugAgentLib.h>
>  #include <Library/CpuExceptionHandlerLib.h>
> +#include <Library/DebugPrintErrorLevelLib.h>
>  
>  
>  //
> diff --git a/MdeModulePkg/Core/Dxe/DxeMain.inf b/MdeModulePkg/Core/Dxe/DxeMain.inf
> index 10375443c0..d91258c087 100644
> --- a/MdeModulePkg/Core/Dxe/DxeMain.inf
> +++ b/MdeModulePkg/Core/Dxe/DxeMain.inf
> @@ -198,6 +198,7 @@
>    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType                       ## CONSUMES
>    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask                   ## CONSUMES
>    gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard                           ## CONSUMES
> +  gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask       ## CONSUMES
>  
>  # [Hob]
>  # RESOURCE_DESCRIPTOR   ## CONSUMES
> diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> index d9c65a8045..e43ec74010 100644
> --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> @@ -160,6 +160,10 @@ CoreDumpGcdMemorySpaceMap (
>      EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *MemorySpaceMap;
>      UINTN                            Index;
>  
> +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
> +      return;
> +    }
> +
>      Status = CoreGetMemorySpaceMap (&NumberOfDescriptors, &MemorySpaceMap);
>      ASSERT (Status == EFI_SUCCESS && MemorySpaceMap != NULL);
>  

this change doesn't belong to the UAF feature introduction. This change
streamlines some debug code, which may or may not be useful, but either
way, it should be presented separately, and discussed separately.

> @@ -202,6 +206,10 @@ CoreDumpGcdIoSpaceMap (
>      EFI_GCD_IO_SPACE_DESCRIPTOR  *IoSpaceMap;
>      UINTN                        Index;
>  
> +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
> +      return;
> +    }
> +
>      Status = CoreGetIoSpaceMap (&NumberOfDescriptors, &IoSpaceMap);
>      ASSERT (Status == EFI_SUCCESS && IoSpaceMap != NULL);
>  
> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> index 663f969c0d..b120c04f8f 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
>  GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
>                                      = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
>  
> +//
> +// Used for Use-After-Free memory detection to promote freed but not used pages.
> +//
> +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
> +
>  /**
>    Set corresponding bits in bitmap table to 1 according to the address.
>  
> @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
>  
>    @return An integer containing the guarded memory bitmap.
>  **/
> -UINTN
> +UINT64
>  GetGuardedMemoryBits (
>    IN EFI_PHYSICAL_ADDRESS    Address,
>    IN UINTN                   NumberOfPages
> @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
>  {
>    UINT64            *BitMap;
>    UINTN             Bits;
> -  UINTN             Result;
> +  UINT64            Result;
>    UINTN             Shift;
>    UINTN             BitsToUnitEnd;
>  
> @@ -1203,6 +1208,373 @@ SetAllGuardPages (
>    }
>  }
>  
> +/**
> +  Check to see if the Use-After-Free (UAF) feature is enabled or not.
> +
> +  @return TRUE/FALSE.
> +**/
> +BOOLEAN
> +IsUafEnabled (
> +  VOID
> +  )
> +{
> +  ASSERT (!IsHeapGuardEnabled());

This seems strange. Do you really mean to assert that the heap guard is
disabled *before* you check the UAF PCD setting?

... Indeed, I see some confusing conditions related to this, below:

> +  return ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) != 0);
> +}
> +
> +/**
> +  Find the address of top-most guarded free page.
> +
> +  @param[out]  Address    Start address of top-most guarded free page.
> +
> +  @return VOID.
> +**/
> +VOID
> +GetLastGuardedFreePageAddress (
> +  OUT EFI_PHYSICAL_ADDRESS      *Address
> +  )
> +{
> +  EFI_PHYSICAL_ADDRESS    AddressGranularity;
> +  EFI_PHYSICAL_ADDRESS    BaseAddress;
> +  UINTN                   Level;
> +  UINT64                  Map;
> +  INTN                    Index;
> +
> +  ASSERT (mMapLevel >= 1);
> +
> +  BaseAddress = 0;
> +  Map = mGuardedMemoryMap;
> +  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> +       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
> +       ++Level) {
> +    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
> +
> +    //
> +    // Find the non-NULL entry at largest index.
> +    //
> +    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
> +      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
> +        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
> +        Map = ((UINT64 *)(UINTN)Map)[Index];
> +        break;
> +      }
> +    }
> +  }
> +
> +  //
> +  // Find the non-zero MSB then get the page address.
> +  //
> +  while (Map != 0) {
> +    Map = RShiftU64 (Map, 1);
> +    BaseAddress += EFI_PAGES_TO_SIZE (1);
> +  }
> +
> +  *Address = BaseAddress;
> +}
> +
> +/**
> +  Record freed pages.
> +
> +  @param[in]  BaseAddress   Base address of just freed pages.
> +  @param[in]  Pages         Number of freed pages.
> +
> +  @return VOID.
> +**/
> +VOID
> +MarkFreedPages (
> +  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
> +  IN UINTN                    Pages
> +  )
> +{
> +  SetGuardedMemoryBits (BaseAddress, Pages);
> +}
> +
> +/**
> +  Record freed pages as well as mark them as not-present, if possible.
> +
> +  @param[in]  BaseAddress   Base address of just freed pages.
> +  @param[in]  Pages         Number of freed pages.
> +
> +  @return VOID.
> +**/
> +VOID
> +EFIAPI
> +GuardFreedPages (
> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> +  IN  UINTN                   Pages
> +  )
> +{
> +  EFI_STATUS      Status;
> +
> +  //
> +  // Do nothing if
> +  // - Use-After-Free detection is not enabled
> +  // - Freed memory is legacy memory lower than 1MB
> +  //
> +  if (!IsUafEnabled () || BaseAddress < BASE_1MB) {
> +    return;
> +  }

OK, so remember this: GuardFreedPages() calls IsUafEnabled().

> +
> +  MarkFreedPages (BaseAddress, Pages);
> +  if (gCpu != NULL) {
> +    //
> +    // Set the flag to make sure that memory allocated for page table
> +    // operations is not guarded; otherwise an infinite loop could occur.
> +    //
> +    mOnGuarding = TRUE;
> +    //
> +    // Note: This might overwrite other attributes needed by other features,
> +    // such as NX memory protection.
> +    //
> +    Status = gCpu->SetMemoryAttributes (
> +                     gCpu,
> +                     BaseAddress,
> +                     EFI_PAGES_TO_SIZE(Pages),
> +                     EFI_MEMORY_RP
> +                     );
> +    //
> +    // Normally we should check the returned Status. But SetMemoryAttributes()
> +    // might itself allocate/free memory, which would make this call fail. It's
> +    // a rare case, so it's OK to let a few tiny holes stay unguarded.
> +    //
> +    if (EFI_ERROR (Status)) {
> +      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%d)\n", BaseAddress, Pages));
> +    }
> +    mOnGuarding = FALSE;
> +  }
> +}
> +
> +/**
> +  Mark all pages freed before CPU Arch Protocol as not-present.
> +
> +**/
> +VOID
> +GuardAllFreedPages (
> +  VOID
> +  )
> +{
> +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    TableEntry;
> +  UINT64    Address;
> +  UINT64    GuardPage;
> +  INTN      Level;
> +  UINTN     BitIndex;
> +  UINTN     GuardPageNumber;
> +
> +  if (mGuardedMemoryMap == 0 ||
> +      mMapLevel == 0 ||
> +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> +    return;
> +  }
> +
> +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> +
> +  SetMem (Tables, sizeof(Tables), 0);
> +  SetMem (Addresses, sizeof(Addresses), 0);
> +  SetMem (Indices, sizeof(Indices), 0);
> +
> +  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> +  Tables[Level]   = mGuardedMemoryMap;
> +  Address         = 0;
> +  GuardPage       = (UINT64)-1;
> +  GuardPageNumber = 0;
> +
> +  while (TRUE) {
> +    if (Indices[Level] > Entries[Level]) {
> +      Tables[Level] = 0;
> +      Level        -= 1;
> +    } else {
> +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> +      Address     = Addresses[Level];
> +
> +      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> +        Level            += 1;
> +        Tables[Level]     = TableEntry;
> +        Addresses[Level]  = Address;
> +        Indices[Level]    = 0;
> +
> +        continue;
> +      } else {
> +        BitIndex = 1;
> +        while (BitIndex != 0) {
> +          if ((TableEntry & BitIndex) != 0) {
> +            if (GuardPage == (UINT64)-1) {
> +              GuardPage = Address;
> +            }
> +            ++GuardPageNumber;
> +          } else if (GuardPageNumber > 0) {
> +            GuardFreedPages (GuardPage, GuardPageNumber);

Then GuardAllFreedPages() calls GuardFreedPages().

> +            GuardPageNumber = 0;
> +            GuardPage       = (UINT64)-1;
> +          }
> +
> +          if (TableEntry == 0) {
> +            break;
> +          }
> +
> +          Address += EFI_PAGES_TO_SIZE (1);
> +          BitIndex = LShiftU64(BitIndex, 1);
> +        }
> +      }
> +    }
> +
> +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> +      break;
> +    }
> +
> +    Indices[Level] += 1;
> +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> +    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
> +
> +  }
> +
> +  //
> +  // Update the maximum address of freed page which can be used for memory
> +  // promotion upon out-of-memory-space.
> +  //
> +  GetLastGuardedFreePageAddress (&Address);
> +  if (Address != 0) {
> +    mLastPromotedPage = Address;
> +  }
> +}
> +
> +/**
> +  This function checks to see if the given memory map descriptor in a memory map
> +  can be merged with any guarded free pages.
> +
> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> +  @param  MaxAddress        Maximum address to stop the merge.
> +
> +  @return VOID
> +
> +**/
> +VOID
> +MergeGuardPages (
> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> +  )
> +{
> +  EFI_PHYSICAL_ADDRESS        EndAddress;
> +  UINT64                      Bitmap;
> +  INTN                        Pages;
> +
> +  if (!IsUafEnabled () ||
> +      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
> +    return;
> +  }
> +
> +  Bitmap = 0;
> +  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
> +  Pages -= MemoryMapEntry->NumberOfPages;
> +  while (Pages > 0) {
> +    if (Bitmap == 0) {
> +      EndAddress = MemoryMapEntry->PhysicalStart +
> +                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
> +      Bitmap = GetGuardedMemoryBits(EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
> +    }
> +
> +    if ((Bitmap & 1) == 0) {
> +      break;
> +    }
> +
> +    Pages--;
> +    MemoryMapEntry->NumberOfPages++;
> +    Bitmap = RShiftU64 (Bitmap, 1);
> +  }
> +}
> +
> +/**
> +  Put part (at most 64 pages a time) guarded free pages back to free page pool.
> +
> +  Use-After-Free detection makes use of 'Used then throw away' way to detect
> +  any illegal access to freed memory. The thrown-away memory will be marked as
> +  not-present so that any access to those memory (after free) will trigger
> +  page-fault.
> +
> +  The problem is that this will consume lots of memory space. Once no memory
> +  left in pool to allocate, we have to restore part of the freed pages to their
> +  normal function. Otherwise the whole system will stop functioning.
> +
> +  @param  StartAddress    Start address of promoted memory.
> +  @param  EndAddress      End address of promoted memory.
> +
> +  @return TRUE    Succeeded to promote memory.
> +  @return FALSE   No free memory found.
> +
> +**/
> +BOOLEAN
> +PromoteGuardedFreePages (
> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> +  )
> +{
> +  EFI_STATUS              Status;
> +  UINTN                   AvailablePages;
> +  UINT64                  Bitmap;
> +  EFI_PHYSICAL_ADDRESS    Start;
> +
> +  if (!IsUafEnabled ()) {
> +    return FALSE;
> +  }
> +
> +  //
> +  // Similar to memory allocation service, always search the freed pages in
> +  // descending direction.
> +  //
> +  Start           = mLastPromotedPage;
> +  AvailablePages  = 0;
> +  while (AvailablePages == 0) {
> +    Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
> +    //
> +    // If the address wraps around, try the really freed pages at top.
> +    //
> +    if (Start > mLastPromotedPage) {
> +      GetLastGuardedFreePageAddress (&Start);
> +      ASSERT (Start != 0);
> +      Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
> +    }
> +
> +    Bitmap = GetGuardedMemoryBits(Start, GUARDED_HEAP_MAP_ENTRY_BITS);
> +    while (Bitmap > 0) {
> +      if ((Bitmap & 1) != 0) {
> +        ++AvailablePages;
> +      } else if (AvailablePages == 0) {
> +        Start += EFI_PAGES_TO_SIZE (1);
> +      } else {
> +        break;
> +      }
> +
> +      Bitmap = RShiftU64(Bitmap, 1);
> +    }
> +  }
> +
> +  if (AvailablePages) {
> +    DEBUG((DEBUG_INFO, "Promoted pages: %lX (%X)\r\n", Start, AvailablePages));
> +    if (gCpu != NULL) {
> +      //
> +      // Set flag to make sure allocating memory without GUARD for page table
> +      // operation; otherwise infinite loops could be caused.
> +      //
> +      mOnGuarding = TRUE;
> +      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
> +      ASSERT_EFI_ERROR (Status);
> +      mOnGuarding = FALSE;
> +    }
> +
> +    mLastPromotedPage = Start;
> +    *StartAddress     = Start;
> +    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
> +    return TRUE;
> +  }
> +
> +  return FALSE;
> +}
> +
>  /**
>    Notify function used to set all Guard pages before CPU Arch Protocol installed.
>  **/
> @@ -1212,7 +1584,14 @@ HeapGuardCpuArchProtocolNotify (
>    )
>  {
>    ASSERT (gCpu != NULL);
> -  SetAllGuardPages ();
> +
> +  if ((PcdGet8 (PcdHeapGuardPropertyMask) & 0x0F) != 0) {
> +    SetAllGuardPages ();
> +  }
> +
> +  if (IsUafEnabled ()) {
> +    GuardAllFreedPages ();

But GuardAllFreedPages() is only called when IsUafEnabled() returns TRUE.

This means that the internal IsUafEnabled() call, in GuardFreedPages(),
is superfluous -- at least when GuardFreedPages() is called from
GuardAllFreedPages().

There are other places that call GuardFreedPages() too, and from those
call sites, the internal IsUafEnabled() does seem necessary.

This is very confusing for someone who would like to confirm that this
patch "does nothing" when the PCD is clear.

I suggest the following improvement:

- Make IsUafEnabled() a robust checker function. It should be possible
to call it from any context, and it should only trip an assert (regarding
the heap guard) when UAF detection is in fact enabled.

- Introduce two variants of GuardFreedPages(). The first variant should
*assume* that UAF detection is enabled. GuardAllFreedPages() should call
this 1st variant. The 2nd variant should check UAF explicitly, and then
call the 1st variant.

Basically, when I read through this patch, I should be able to mentally
replace some function calls with constant FALSE.
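
For example, a minimal sketch of what I mean (GuardFreedPagesChecked() is
just a placeholder name):

  BOOLEAN
  IsUafEnabled (
    VOID
    )
  {
    //
    // Check the PCD first, so this is safe to call from any context; only
    // assert on the heap guard conflict when UAF detection is actually on.
    //
    if ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) == 0) {
      return FALSE;
    }
    ASSERT (!IsHeapGuardEnabled ());
    return TRUE;
  }

  //
  // 1st variant: *assumes* UAF detection is enabled. GuardAllFreedPages()
  // calls this one directly.
  //
  VOID
  GuardFreedPages (
    IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
    IN  UINTN                   Pages
    );

  //
  // 2nd variant: checks UAF explicitly, then calls the 1st variant. The
  // other call sites use this one.
  //
  VOID
  GuardFreedPagesChecked (
    IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
    IN  UINTN                   Pages
    )
  {
    if (IsUafEnabled ()) {
      GuardFreedPages (BaseAddress, Pages);
    }
  }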

Thanks
Laszlo



> +  }
>  }
>  
>  /**
> @@ -1264,6 +1643,14 @@ DumpGuardedMemoryBitmap (
>    CHAR8     *Ruler1;
>    CHAR8     *Ruler2;
>  
> +  if (!IsHeapGuardEnabled () && !IsUafEnabled ()) {
> +    return;
> +  }
> +
> +  if ((GetDebugPrintErrorLevel () & HEAP_GUARD_DEBUG_LEVEL) == 0) {
> +    return;
> +  }
> +
>    if (mGuardedMemoryMap == 0 ||
>        mMapLevel == 0 ||
>        mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> index 8c34692439..3f2a5283c2 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> @@ -407,6 +407,72 @@ HeapGuardCpuArchProtocolNotify (
>    VOID
>    );
>  
> +/**
> +  This function checks to see if the given memory map descriptor in a memory map
> +  can be merged with any guarded free pages.
> +
> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> +  @param  MaxAddress        Maximum address to stop the merge.
> +
> +  @return VOID
> +
> +**/
> +VOID
> +MergeGuardPages (
> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> +  );
> +
> +/**
> +  Record freed pages as well as mark them as not-present, if possible.
> +
> +  @param[in]  BaseAddress   Base address of just freed pages.
> +  @param[in]  Pages         Number of freed pages.
> +
> +  @return VOID.
> +**/
> +VOID
> +EFIAPI
> +GuardFreedPages (
> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> +  IN  UINTN                   Pages
> +  );
> +
> +/**
> +  Check to see if the use-after-free (UAF) feature is enabled or not.
> +
> +  @return TRUE/FALSE.
> +**/
> +BOOLEAN
> +IsUafEnabled (
> +  VOID
> +  );
> +
> +/**
> +  Put part (at most 64 pages a time) guarded free pages back to free page pool.
> +
> +  Use-After-Free detection makes use of 'Used then throw away' way to detect
> +  any illegal access to freed memory. The thrown-away memory will be marked as
> +  not-present so that any access to those memory (after free) will trigger
> +  page-fault.
> +
> +  The problem is that this will consume lots of memory space. Once no memory
> +  left in pool to allocate, we have to restore part of the freed pages to their
> +  normal function. Otherwise the whole system will stop functioning.
> +
> +  @param  StartAddress    Start address of promoted memory.
> +  @param  EndAddress      End address of promoted memory.
> +
> +  @return TRUE    Succeeded to promote memory.
> +  @return FALSE   No free memory found.
> +
> +**/
> +BOOLEAN
> +PromoteGuardedFreePages (
> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> +  );
> +
>  extern BOOLEAN mOnGuarding;
>  
>  #endif
> diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
> index 3b4cc08e7c..9f4419fa5b 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
> +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
> @@ -401,9 +401,11 @@ PromoteMemoryResource (
>    VOID
>    )
>  {
> -  LIST_ENTRY         *Link;
> -  EFI_GCD_MAP_ENTRY  *Entry;
> -  BOOLEAN            Promoted;
> +  LIST_ENTRY            *Link;
> +  EFI_GCD_MAP_ENTRY     *Entry;
> +  BOOLEAN               Promoted;
> +  EFI_PHYSICAL_ADDRESS  StartAddress;
> +  EFI_PHYSICAL_ADDRESS  EndAddress;
>  
>    DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
>  
> @@ -451,6 +453,24 @@ PromoteMemoryResource (
>  
>    CoreReleaseGcdMemoryLock ();
>  
> +  if (!Promoted) {
> +    //
> +    // If Use-After-Free detection is enabled, we might promote pages from
> +    // guarded free pages.
> +    //
> +    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
> +    if (Promoted) {
> +      CoreAcquireGcdMemoryLock ();
> +      CoreAddRange (
> +        EfiConventionalMemory,
> +        StartAddress,
> +        EndAddress,
> +        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
> +        );
> +      CoreReleaseGcdMemoryLock ();
> +    }
> +  }
> +
>    return Promoted;
>  }
>  /**
> @@ -896,9 +916,13 @@ CoreConvertPagesEx (
>      }
>  
>      //
> -    // Add our new range in
> +    // Add our new range in. Don't do this for freed pages if Use-After-Free
> +    // detection is enabled.
>      //
> -    CoreAddRange (MemType, Start, RangeEnd, Attribute);
> +    if (!IsUafEnabled () || !ChangingType || MemType != EfiConventionalMemory) {
> +      CoreAddRange (MemType, Start, RangeEnd, Attribute);
> +    }
> +
>      if (ChangingType && (MemType == EfiConventionalMemory)) {
>        //
>        // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
> @@ -1514,6 +1538,7 @@ CoreFreePages (
>  
>    Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
>    if (!EFI_ERROR (Status)) {
> +    GuardFreedPages (Memory, NumberOfPages);
>      CoreUpdateProfile (
>        (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
>        MemoryProfileActionFreePages,
> @@ -1908,9 +1933,7 @@ Done:
>    *MemoryMapSize = BufferSize;
>  
>    DEBUG_CODE (
> -    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
> -      DumpGuardedMemoryBitmap ();
> -    }
> +    DumpGuardedMemoryBitmap ();
>    );
>  
>    return Status;
> diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> index 1ff2061f7f..0cd74267b3 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
> +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> @@ -26,7 +26,8 @@ typedef struct {
>  } POOL_FREE;
>  
>  
> -#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
> +#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
> +#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
>  typedef struct {
>    UINT32          Signature;
>    UINT32          Reserved;
> @@ -367,6 +368,7 @@ CoreAllocatePoolI (
>    UINTN       NoPages;
>    UINTN       Granularity;
>    BOOLEAN     HasPoolTail;
> +  BOOLEAN     PageAsPool;
>  
>    ASSERT_LOCKED (&mPoolMemoryLock);
>  
> @@ -386,6 +388,7 @@ CoreAllocatePoolI (
>  
>    HasPoolTail  = !(NeedGuard &&
>                     ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> +  PageAsPool = (IsUafEnabled () && !mOnGuarding);
>  
>    //
>    // Adjusting the Size to be of proper alignment so that
> @@ -406,7 +409,7 @@ CoreAllocatePoolI (
>    // If allocation is over max size, just allocate pages for the request
>    // (slow)
>    //
> -  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
> +  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
>      if (!HasPoolTail) {
>        Size -= sizeof (POOL_TAIL);
>      }
> @@ -498,7 +501,7 @@ Done:
>      //
>      // If we have a pool buffer, fill in the header & tail info
>      //
> -    Head->Signature = POOL_HEAD_SIGNATURE;
> +    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
>      Head->Size      = Size;
>      Head->Type      = (EFI_MEMORY_TYPE) PoolType;
>      Buffer          = Head->Data;
> @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
>    CoreFreePoolPages (Memory, NoPages);
>    CoreReleaseMemoryLock ();
>  
> +  GuardFreedPages (Memory, NoPages);
>    ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
>      (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
>  }
> @@ -685,15 +689,19 @@ CoreFreePoolI (
>    UINTN       Granularity;
>    BOOLEAN     IsGuarded;
>    BOOLEAN     HasPoolTail;
> +  BOOLEAN     PageAsPool;
>  
>    ASSERT(Buffer != NULL);
>    //
>    // Get the head & tail of the pool entry
>    //
> -  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
> +  Head = BASE_CR (Buffer, POOL_HEAD, Data);
>    ASSERT(Head != NULL);
>  
> -  if (Head->Signature != POOL_HEAD_SIGNATURE) {
> +  if (Head->Signature != POOL_HEAD_SIGNATURE &&
> +      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
> +    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
> +            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
>      return EFI_INVALID_PARAMETER;
>    }
>  
> @@ -701,6 +709,7 @@ CoreFreePoolI (
>                  IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
>    HasPoolTail = !(IsGuarded &&
>                    ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> +  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
>  
>    if (HasPoolTail) {
>      Tail = HEAD_TO_TAIL (Head);
> @@ -757,7 +766,7 @@ CoreFreePoolI (
>    //
>    // If it's not on the list, it must be pool pages
>    //
> -  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
> +  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
>  
>      //
>      // Return the memory pages back to free memory
> diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> index 05eb4f422b..f6595c90ed 100644
> --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>  #include <Guid/PropertiesTable.h>
>  
>  #include "DxeMain.h"
> +#include "HeapGuard.h"
>  
>  #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
>    ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
> @@ -205,16 +206,13 @@ MergeMemoryMap (
>      NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
>  
>      do {
> -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
> +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
> +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
>        if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
> -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> -        if (NewMemoryMapEntry != MemoryMapEntry) {
> -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> -        }
> -
> +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
>          NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
>          continue;
>        } else {
> 



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/3] MdeModulePkg/MdeModulePkg.dec: add new PCD for UAF detection feature
  2018-10-19  1:50 ` [PATCH 1/3] MdeModulePkg/MdeModulePkg.dec: add new PCD for UAF detection feature Jian J Wang
  2018-10-19 11:27   ` Laszlo Ersek
@ 2018-10-22  2:20   ` Zeng, Star
  1 sibling, 0 replies; 16+ messages in thread
From: Zeng, Star @ 2018-10-22  2:20 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel
  Cc: Michael D Kinney, Ruiyu Ni, Jiewen Yao, Laszlo Ersek, star.zeng

On 2018/10/19 9:50, Jian J Wang wrote:
> UAF (Use-After-Free) memory detection is new feature introduced to
> detect illegal access to memory which has been freed. The principle
> behind is similar to heap guard feature, that is we'll turn all pool
> memory allocation to page allocation and mark them to be not-present
> once they are freed.
> 
> This also implies that, once a page is allocated and freed, it cannot
> be re-allocated. This will bring another issue, which is that there's
> risk that memory space will be used out. To address it, this patch
> series add logic put part (at most 64 pages a time) of freed pages
> back into page pool, so that the memory service can still have memory
> to allocate, when all memory space have been allocated once. This is
> called memory promotion. The promoted pages are always from the eldest
> pages freed.
> 
> To use this feature, one can simply set following PCD to 1
>    gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
> 
> Please note this feature cannot be used with heap guard feature controlled
> by PCD gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask.

How about adding this sentence to the description of 
PcdUseAfterFreeDetectionPropertyMask, and correspondingly adding some 
information to the description of PcdHeapGuardPropertyMask?
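
For example, something along these lines (the exact wording is only a 
suggestion):

  ## This mask is to control Use-After-Free Memory Detection behavior.
  #  Note: This feature cannot be used together with the heap guard feature
  #  controlled by PcdHeapGuardPropertyMask.<BR>
  #   BIT0    - Enable Use-After-Free memory detection for UEFI modules.<BR>
  #   BIT1..7 - Reserved for future uses.<BR>
  # @Prompt The Use-After-Free Memory Detection feature mask
  gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask|0x0|UINT8|0x30001056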

> 
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>   MdeModulePkg/MdeModulePkg.dec | 6 ++++++
>   1 file changed, 6 insertions(+)
> 
> diff --git a/MdeModulePkg/MdeModulePkg.dec b/MdeModulePkg/MdeModulePkg.dec
> index 6037504fa7..83736cd761 100644
> --- a/MdeModulePkg/MdeModulePkg.dec
> +++ b/MdeModulePkg/MdeModulePkg.dec
> @@ -1029,6 +1029,12 @@
>     # @Prompt Enable UEFI Stack Guard.
>     gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard|FALSE|BOOLEAN|0x30001055
>   
> +  ## This mask is to control Use-After-Free Memory Detection behavior.
> +  #   BIT0    - Enable Use-After-Free memory detection for UEFI modules.<BR>
> +  #   BIT1..7 - Reserved for future uses.<BR>
> +  # @Prompt The Use-After-Free Memory Detection feature mask
> +  gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask|0x0|UINT8|0x30001056

Remember to update MdeModulePkg.uni for it.
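
For example (a sketch only; the string tokens follow the naming pattern of 
the existing PCD entries, so please double check against MdeModulePkg.uni):

  #string STR_gEfiMdeModulePkgTokenSpaceGuid_PcdUseAfterFreeDetectionPropertyMask_PROMPT  #language en-US "The Use-After-Free Memory Detection feature mask"

  #string STR_gEfiMdeModulePkgTokenSpaceGuid_PcdUseAfterFreeDetectionPropertyMask_HELP    #language en-US "This mask is to control Use-After-Free Memory Detection behavior. BIT0 enables the detection for UEFI modules; BIT1..7 are reserved for future uses."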

Thanks,
Star

> +
>   [PcdsFixedAtBuild, PcdsPatchableInModule]
>     ## Dynamic type PCD can be registered callback function for Pcd setting action.
>     #  PcdMaxPeiPcdCallBackNumberPerPcdEntry indicates the maximum number of callback function
> 



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection
  2018-10-19  1:50 ` [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection Jian J Wang
  2018-10-19 12:04   ` Laszlo Ersek
@ 2018-10-22  2:53   ` Zeng, Star
  2018-10-22  7:12     ` Wang, Jian J
  1 sibling, 1 reply; 16+ messages in thread
From: Zeng, Star @ 2018-10-22  2:53 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel
  Cc: Michael D Kinney, Ruiyu Ni, Jiewen Yao, Laszlo Ersek, star.zeng

On 2018/10/19 9:50, Jian J Wang wrote:
> UAF (Use-After-Free) memory detection is new feature introduced to
> detect illegal access to memory which has been freed. The principle
> behind is similar to heap guard feature, that is the core turn all pool
> memory allocation to page allocation and mark them to be not-present
> once they are freed.
> 
> This also implies that, once a page is allocated and freed, it cannot
> be re-allocated. This will bring another issue, which is that there's
> risk that memory space will be used out. To address it, the memory
> service add logic to put part (at most 64 pages a time) of freed pages
> back into page pool, so that the memory service can still have memory
> to allocate, when all memory space have been allocated once. This is
> called memory promotion. The promoted pages are always from the eldest
> pages which have been freed.
> 
> To use this feature, one can simply set following PCD to 1
>    gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
> 
> Please note this feature cannot be used with heap guard feature controlled
> by PCD gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask.
> 
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>

I did not review the whole patch thoroughly. But I suggest putting the 
code for UAF in a separate file if that is feasible. I am aware that 
UAF will share much code with HeapGuard, so could we have Uaf.c, 
HeapGuard.c and GuardPage.c (these file names are just examples)?

> ---
>   MdeModulePkg/Core/Dxe/DxeMain.h              |   1 +
>   MdeModulePkg/Core/Dxe/DxeMain.inf            |   1 +
>   MdeModulePkg/Core/Dxe/Gcd/Gcd.c              |   8 +
>   MdeModulePkg/Core/Dxe/Mem/HeapGuard.c        | 393 ++++++++++++++++++++++++++-
>   MdeModulePkg/Core/Dxe/Mem/HeapGuard.h        |  66 +++++
>   MdeModulePkg/Core/Dxe/Mem/Page.c             |  39 ++-
>   MdeModulePkg/Core/Dxe/Mem/Pool.c             |  21 +-
>   MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c |  16 +-
>   8 files changed, 519 insertions(+), 26 deletions(-)
> 
> diff --git a/MdeModulePkg/Core/Dxe/DxeMain.h b/MdeModulePkg/Core/Dxe/DxeMain.h
> index 2dec9da5e3..ae75cc5b25 100644
> --- a/MdeModulePkg/Core/Dxe/DxeMain.h
> +++ b/MdeModulePkg/Core/Dxe/DxeMain.h
> @@ -92,6 +92,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>   #include <Library/DxeServicesLib.h>
>   #include <Library/DebugAgentLib.h>
>   #include <Library/CpuExceptionHandlerLib.h>
> +#include <Library/DebugPrintErrorLevelLib.h>
>   
>   
>   //
> diff --git a/MdeModulePkg/Core/Dxe/DxeMain.inf b/MdeModulePkg/Core/Dxe/DxeMain.inf
> index 10375443c0..d91258c087 100644
> --- a/MdeModulePkg/Core/Dxe/DxeMain.inf
> +++ b/MdeModulePkg/Core/Dxe/DxeMain.inf
> @@ -198,6 +198,7 @@
>     gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType                       ## CONSUMES
>     gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask                   ## CONSUMES
>     gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard                           ## CONSUMES
> +  gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask       ## CONSUMES
>   
>   # [Hob]
>   # RESOURCE_DESCRIPTOR   ## CONSUMES
> diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> index d9c65a8045..e43ec74010 100644
> --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> @@ -160,6 +160,10 @@ CoreDumpGcdMemorySpaceMap (
>       EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *MemorySpaceMap;
>       UINTN                            Index;
>   
> +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
> +      return;
> +    }

I am aware that the purpose of this change is to optimize some code when 
DEBUG_GCD is not set.
But I do not suggest newly introducing a dependency on 
DebugPrintErrorLevelLib; I think this check should be hidden in the 
instance of DebugLib. Maybe a new macro DEBUG_CODE_ERROR_LEVEL (this 
macro name is just an example) can be introduced if we really want to 
do that, as sketched below.
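
For example, a rough sketch (DebugPrintLevelEnabled() is a hypothetical 
DebugLib API here, used only to illustrate the idea):

  #define DEBUG_CODE_ERROR_LEVEL(ErrorLevel, Expression)  \
    DEBUG_CODE_BEGIN ();                                  \
    if (DebugPrintLevelEnabled (ErrorLevel)) {            \
      Expression                                          \
    }                                                     \
    DEBUG_CODE_END ()

Then the dump body in CoreDumpGcdMemorySpaceMap() could be wrapped in 
DEBUG_CODE_ERROR_LEVEL (DEBUG_GCD, ...) instead of plain DEBUG_CODE (...), 
keeping the error-level check inside the DebugLib abstraction.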


Thanks,
Star

> +
>       Status = CoreGetMemorySpaceMap (&NumberOfDescriptors, &MemorySpaceMap);
>       ASSERT (Status == EFI_SUCCESS && MemorySpaceMap != NULL);
>   
> @@ -202,6 +206,10 @@ CoreDumpGcdIoSpaceMap (
>       EFI_GCD_IO_SPACE_DESCRIPTOR  *IoSpaceMap;
>       UINTN                        Index;
>   
> +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
> +      return;
> +    }
> +
>       Status = CoreGetIoSpaceMap (&NumberOfDescriptors, &IoSpaceMap);
>       ASSERT (Status == EFI_SUCCESS && IoSpaceMap != NULL);
>   
> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> index 663f969c0d..b120c04f8f 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
>   GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
>                                       = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
>   
> +//
> +// Used for Use-After-Free memory detection to promote freed but not used pages.
> +//
> +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
> +
>   /**
>     Set corresponding bits in bitmap table to 1 according to the address.
>   
> @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
>   
>     @return An integer containing the guarded memory bitmap.
>   **/
> -UINTN
> +UINT64
>   GetGuardedMemoryBits (
>     IN EFI_PHYSICAL_ADDRESS    Address,
>     IN UINTN                   NumberOfPages
> @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
>   {
>     UINT64            *BitMap;
>     UINTN             Bits;
> -  UINTN             Result;
> +  UINT64            Result;
>     UINTN             Shift;
>     UINTN             BitsToUnitEnd;
>   
> @@ -1203,6 +1208,373 @@ SetAllGuardPages (
>     }
>   }
>   
> +/**
> +  Check to see if the Use-After-Free (UAF) feature is enabled or not.
> +
> +  @return TRUE/FALSE.
> +**/
> +BOOLEAN
> +IsUafEnabled (
> +  VOID
> +  )
> +{
> +  ASSERT (!IsHeapGuardEnabled());
> +  return ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) != 0);
> +}
> +
> +/**
> +  Find the address of top-most guarded free page.
> +
> +  @param[out]  Address    Start address of top-most guarded free page.
> +
> +  @return VOID.
> +**/
> +VOID
> +GetLastGuardedFreePageAddress (
> +  OUT EFI_PHYSICAL_ADDRESS      *Address
> +  )
> +{
> +  EFI_PHYSICAL_ADDRESS    AddressGranularity;
> +  EFI_PHYSICAL_ADDRESS    BaseAddress;
> +  UINTN                   Level;
> +  UINT64                  Map;
> +  INTN                    Index;
> +
> +  ASSERT (mMapLevel >= 1);
> +
> +  BaseAddress = 0;
> +  Map = mGuardedMemoryMap;
> +  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> +       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
> +       ++Level) {
> +    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
> +
> +    //
> +    // Find the non-NULL entry at largest index.
> +    //
> +    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
> +      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
> +        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
> +        Map = ((UINT64 *)(UINTN)Map)[Index];
> +        break;
> +      }
> +    }
> +  }
> +
> +  //
> +  // Find the non-zero MSB then get the page address.
> +  //
> +  while (Map != 0) {
> +    Map = RShiftU64 (Map, 1);
> +    BaseAddress += EFI_PAGES_TO_SIZE (1);
> +  }
> +
> +  *Address = BaseAddress;
> +}
> +
> +/**
> +  Record freed pages.
> +
> +  @param[in]  BaseAddress   Base address of just freed pages.
> +  @param[in]  Pages         Number of freed pages.
> +
> +  @return VOID.
> +**/
> +VOID
> +MarkFreedPages (
> +  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
> +  IN UINTN                    Pages
> +  )
> +{
> +  SetGuardedMemoryBits (BaseAddress, Pages);
> +}
> +
> +/**
> +  Record freed pages as well as mark them as not-present, if possible.
> +
> +  @param[in]  BaseAddress   Base address of just freed pages.
> +  @param[in]  Pages         Number of freed pages.
> +
> +  @return VOID.
> +**/
> +VOID
> +EFIAPI
> +GuardFreedPages (
> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> +  IN  UINTN                   Pages
> +  )
> +{
> +  EFI_STATUS      Status;
> +
> +  //
> +  // Do nothing if
> +  // - Use-After-Free detection is not enabled
> +  // - Freed memory is legacy memory lower than 1MB
> +  //
> +  if (!IsUafEnabled () || BaseAddress < BASE_1MB) {
> +    return;
> +  }
> +
> +  MarkFreedPages (BaseAddress, Pages);
> +  if (gCpu != NULL) {
> +    //
> +    // Set flag to make sure allocating memory without GUARD for page table
> +    // operation; otherwise infinite loops could be caused.
> +    //
> +    mOnGuarding = TRUE;
> +    //
> +    // Note: This might overwrite other attributes needed by other features,
> +    // such as NX memory protection.
> +    //
> +    Status = gCpu->SetMemoryAttributes (
> +                     gCpu,
> +                     BaseAddress,
> +                     EFI_PAGES_TO_SIZE(Pages),
> +                     EFI_MEMORY_RP
> +                     );
> +    //
> +    // Normally we should check the returned Status. But there might be memory
> +    // alloc/free involved in SetMemoryAttributes(), which would fail this
> +    // calling. It's rare case so it's OK to let a few tiny holes be not-guarded.
> +    //
> +    if (EFI_ERROR (Status)) {
> +      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%d)\n", BaseAddress, Pages));
> +    }
> +    mOnGuarding = FALSE;
> +  }
> +}
> +
> +/**
> +  Mark all pages freed before CPU Arch Protocol as not-present.
> +
> +**/
> +VOID
> +GuardAllFreedPages (
> +  VOID
> +  )
> +{
> +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    TableEntry;
> +  UINT64    Address;
> +  UINT64    GuardPage;
> +  INTN      Level;
> +  UINTN     BitIndex;
> +  UINTN     GuardPageNumber;
> +
> +  if (mGuardedMemoryMap == 0 ||
> +      mMapLevel == 0 ||
> +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> +    return;
> +  }
> +
> +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> +
> +  SetMem (Tables, sizeof(Tables), 0);
> +  SetMem (Addresses, sizeof(Addresses), 0);
> +  SetMem (Indices, sizeof(Indices), 0);
> +
> +  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> +  Tables[Level]   = mGuardedMemoryMap;
> +  Address         = 0;
> +  GuardPage       = (UINT64)-1;
> +  GuardPageNumber = 0;
> +
> +  while (TRUE) {
> +    if (Indices[Level] > Entries[Level]) {
> +      Tables[Level] = 0;
> +      Level        -= 1;
> +    } else {
> +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> +      Address     = Addresses[Level];
> +
> +      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> +        Level            += 1;
> +        Tables[Level]     = TableEntry;
> +        Addresses[Level]  = Address;
> +        Indices[Level]    = 0;
> +
> +        continue;
> +      } else {
> +        BitIndex = 1;
> +        while (BitIndex != 0) {
> +          if ((TableEntry & BitIndex) != 0) {
> +            if (GuardPage == (UINT64)-1) {
> +              GuardPage = Address;
> +            }
> +            ++GuardPageNumber;
> +          } else if (GuardPageNumber > 0) {
> +            GuardFreedPages (GuardPage, GuardPageNumber);
> +            GuardPageNumber = 0;
> +            GuardPage       = (UINT64)-1;
> +          }
> +
> +          if (TableEntry == 0) {
> +            break;
> +          }
> +
> +          Address += EFI_PAGES_TO_SIZE (1);
> +          BitIndex = LShiftU64(BitIndex, 1);
> +        }
> +      }
> +    }
> +
> +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> +      break;
> +    }
> +
> +    Indices[Level] += 1;
> +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> +    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
> +
> +  }
> +
> +  //
> +  // Update the maximum address of freed page which can be used for memory
> +  // promotion upon out-of-memory-space.
> +  //
> +  GetLastGuardedFreePageAddress (&Address);
> +  if (Address != 0) {
> +    mLastPromotedPage = Address;
> +  }
> +}
> +
> +/**
> +  This function checks to see if the given memory map descriptor in a memory map
> +  can be merged with any guarded free pages.
> +
> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> +  @param  MaxAddress        Maximum address to stop the merge.
> +
> +  @return VOID
> +
> +**/
> +VOID
> +MergeGuardPages (
> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> +  )
> +{
> +  EFI_PHYSICAL_ADDRESS        EndAddress;
> +  UINT64                      Bitmap;
> +  INTN                        Pages;
> +
> +  if (!IsUafEnabled () ||
> +      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
> +    return;
> +  }
> +
> +  Bitmap = 0;
> +  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
> +  Pages -= MemoryMapEntry->NumberOfPages;
> +  while (Pages > 0) {
> +    if (Bitmap == 0) {
> +      EndAddress = MemoryMapEntry->PhysicalStart +
> +                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
> +      Bitmap = GetGuardedMemoryBits(EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
> +    }
> +
> +    if ((Bitmap & 1) == 0) {
> +      break;
> +    }
> +
> +    Pages--;
> +    MemoryMapEntry->NumberOfPages++;
> +    Bitmap = RShiftU64 (Bitmap, 1);
> +  }
> +}
> +
> +/**
> +  Put part (at most 64 pages a time) guarded free pages back to free page pool.
> +
> +  Use-After-Free detection makes use of 'Used then throw away' way to detect
> +  any illegal access to freed memory. The thrown-away memory will be marked as
> +  not-present so that any access to those memory (after free) will trigger
> +  page-fault.
> +
> +  The problem is that this will consume lots of memory space. Once no memory
> +  left in pool to allocate, we have to restore part of the freed pages to their
> +  normal function. Otherwise the whole system will stop functioning.
> +
> +  @param  StartAddress    Start address of promoted memory.
> +  @param  EndAddress      End address of promoted memory.
> +
> +  @return TRUE    Succeeded to promote memory.
> +  @return FALSE   No free memory found.
> +
> +**/
> +BOOLEAN
> +PromoteGuardedFreePages (
> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> +  )
> +{
> +  EFI_STATUS              Status;
> +  UINTN                   AvailablePages;
> +  UINT64                  Bitmap;
> +  EFI_PHYSICAL_ADDRESS    Start;
> +
> +  if (!IsUafEnabled ()) {
> +    return FALSE;
> +  }
> +
> +  //
> +  // Similar to memory allocation service, always search the freed pages in
> +  // descending direction.
> +  //
> +  Start           = mLastPromotedPage;
> +  AvailablePages  = 0;
> +  while (AvailablePages == 0) {
> +    Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
> +    //
> +    // If the address wraps around, try the really freed pages at top.
> +    //
> +    if (Start > mLastPromotedPage) {
> +      GetLastGuardedFreePageAddress (&Start);
> +      ASSERT (Start != 0);
> +      Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
> +    }
> +
> +    Bitmap = GetGuardedMemoryBits(Start, GUARDED_HEAP_MAP_ENTRY_BITS);
> +    while (Bitmap > 0) {
> +      if ((Bitmap & 1) != 0) {
> +        ++AvailablePages;
> +      } else if (AvailablePages == 0) {
> +        Start += EFI_PAGES_TO_SIZE (1);
> +      } else {
> +        break;
> +      }
> +
> +      Bitmap = RShiftU64(Bitmap, 1);
> +    }
> +  }
> +
> +  if (AvailablePages) {
> +    DEBUG((DEBUG_INFO, "Promoted pages: %lX (%X)\r\n", Start, AvailablePages));
> +    if (gCpu != NULL) {
> +      //
> +      // Set flag to make sure allocating memory without GUARD for page table
> +      // operation; otherwise infinite loops could be caused.
> +      //
> +      mOnGuarding = TRUE;
> +      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
> +      ASSERT_EFI_ERROR (Status);
> +      mOnGuarding = FALSE;
> +    }
> +
> +    mLastPromotedPage = Start;
> +    *StartAddress     = Start;
> +    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
> +    return TRUE;
> +  }
> +
> +  return FALSE;
> +}
> +
>   /**
>     Notify function used to set all Guard pages before CPU Arch Protocol installed.
>   **/
> @@ -1212,7 +1584,14 @@ HeapGuardCpuArchProtocolNotify (
>     )
>   {
>     ASSERT (gCpu != NULL);
> -  SetAllGuardPages ();
> +
> +  if ((PcdGet8 (PcdHeapGuardPropertyMask) & 0x0F) != 0) {
> +    SetAllGuardPages ();
> +  }
> +
> +  if (IsUafEnabled ()) {
> +    GuardAllFreedPages ();
> +  }
>   }
>   
>   /**
> @@ -1264,6 +1643,14 @@ DumpGuardedMemoryBitmap (
>     CHAR8     *Ruler1;
>     CHAR8     *Ruler2;
>   
> +  if (!IsHeapGuardEnabled () && !IsUafEnabled ()) {
> +    return;
> +  }
> +
> +  if ((GetDebugPrintErrorLevel () & HEAP_GUARD_DEBUG_LEVEL) == 0) {
> +    return;
> +  }
> +
>     if (mGuardedMemoryMap == 0 ||
>         mMapLevel == 0 ||
>         mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> index 8c34692439..3f2a5283c2 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> @@ -407,6 +407,72 @@ HeapGuardCpuArchProtocolNotify (
>     VOID
>     );
>   
> +/**
> +  This function checks to see if the given memory map descriptor in a memory map
> +  can be merged with any guarded free pages.
> +
> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> +  @param  MaxAddress        Maximum address to stop the merge.
> +
> +  @return VOID
> +
> +**/
> +VOID
> +MergeGuardPages (
> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> +  );
> +
> +/**
> +  Record freed pages as well as mark them as not-present, if possible.
> +
> +  @param[in]  BaseAddress   Base address of just freed pages.
> +  @param[in]  Pages         Number of freed pages.
> +
> +  @return VOID.
> +**/
> +VOID
> +EFIAPI
> +GuardFreedPages (
> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> +  IN  UINTN                   Pages
> +  );
> +
> +/**
> +  Check to see if the use-after-free (UAF) feature is enabled or not.
> +
> +  @return TRUE/FALSE.
> +**/
> +BOOLEAN
> +IsUafEnabled (
> +  VOID
> +  );
> +
> +/**
> +  Put part (at most 64 pages a time) guarded free pages back to free page pool.
> +
> +  Use-After-Free detection makes use of 'Used then throw away' way to detect
> +  any illegal access to freed memory. The thrown-away memory will be marked as
> +  not-present so that any access to those memory (after free) will trigger
> +  page-fault.
> +
> +  The problem is that this will consume lots of memory space. Once no memory
> +  left in pool to allocate, we have to restore part of the freed pages to their
> +  normal function. Otherwise the whole system will stop functioning.
> +
> +  @param  StartAddress    Start address of promoted memory.
> +  @param  EndAddress      End address of promoted memory.
> +
> +  @return TRUE    Succeeded to promote memory.
> +  @return FALSE   No free memory found.
> +
> +**/
> +BOOLEAN
> +PromoteGuardedFreePages (
> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> +  );
> +
>   extern BOOLEAN mOnGuarding;
>   
>   #endif
> diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
> index 3b4cc08e7c..9f4419fa5b 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
> +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
> @@ -401,9 +401,11 @@ PromoteMemoryResource (
>     VOID
>     )
>   {
> -  LIST_ENTRY         *Link;
> -  EFI_GCD_MAP_ENTRY  *Entry;
> -  BOOLEAN            Promoted;
> +  LIST_ENTRY            *Link;
> +  EFI_GCD_MAP_ENTRY     *Entry;
> +  BOOLEAN               Promoted;
> +  EFI_PHYSICAL_ADDRESS  StartAddress;
> +  EFI_PHYSICAL_ADDRESS  EndAddress;
>   
>     DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
>   
> @@ -451,6 +453,24 @@ PromoteMemoryResource (
>   
>     CoreReleaseGcdMemoryLock ();
>   
> +  if (!Promoted) {
> +    //
> +    // If Use-After-Free detection is enabled, we might promote pages from
> +    // guarded free pages.
> +    //
> +    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
> +    if (Promoted) {
> +      CoreAcquireGcdMemoryLock ();
> +      CoreAddRange (
> +        EfiConventionalMemory,
> +        StartAddress,
> +        EndAddress,
> +        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
> +        );
> +      CoreReleaseGcdMemoryLock ();
> +    }
> +  }
> +
>     return Promoted;
>   }
>   /**
> @@ -896,9 +916,13 @@ CoreConvertPagesEx (
>       }
>   
>       //
> -    // Add our new range in
> +    // Add our new range in. Don't do this for freed pages if Use-After-Free
> +    // detection is enabled.
>       //
> -    CoreAddRange (MemType, Start, RangeEnd, Attribute);
> +    if (!IsUafEnabled () || !ChangingType || MemType != EfiConventionalMemory) {
> +      CoreAddRange (MemType, Start, RangeEnd, Attribute);
> +    }
> +
>       if (ChangingType && (MemType == EfiConventionalMemory)) {
>         //
>         // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
> @@ -1514,6 +1538,7 @@ CoreFreePages (
>   
>     Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
>     if (!EFI_ERROR (Status)) {
> +    GuardFreedPages (Memory, NumberOfPages);
>       CoreUpdateProfile (
>         (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
>         MemoryProfileActionFreePages,
> @@ -1908,9 +1933,7 @@ Done:
>     *MemoryMapSize = BufferSize;
>   
>     DEBUG_CODE (
> -    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
> -      DumpGuardedMemoryBitmap ();
> -    }
> +    DumpGuardedMemoryBitmap ();
>     );
>   
>     return Status;
> diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> index 1ff2061f7f..0cd74267b3 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
> +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> @@ -26,7 +26,8 @@ typedef struct {
>   } POOL_FREE;
>   
>   
> -#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
> +#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
> +#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
>   typedef struct {
>     UINT32          Signature;
>     UINT32          Reserved;
> @@ -367,6 +368,7 @@ CoreAllocatePoolI (
>     UINTN       NoPages;
>     UINTN       Granularity;
>     BOOLEAN     HasPoolTail;
> +  BOOLEAN     PageAsPool;
>   
>     ASSERT_LOCKED (&mPoolMemoryLock);
>   
> @@ -386,6 +388,7 @@ CoreAllocatePoolI (
>   
>     HasPoolTail  = !(NeedGuard &&
>                      ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> +  PageAsPool = (IsUafEnabled () && !mOnGuarding);
>   
>     //
>     // Adjusting the Size to be of proper alignment so that
> @@ -406,7 +409,7 @@ CoreAllocatePoolI (
>     // If allocation is over max size, just allocate pages for the request
>     // (slow)
>     //
> -  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
> +  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
>       if (!HasPoolTail) {
>         Size -= sizeof (POOL_TAIL);
>       }
> @@ -498,7 +501,7 @@ Done:
>       //
>       // If we have a pool buffer, fill in the header & tail info
>       //
> -    Head->Signature = POOL_HEAD_SIGNATURE;
> +    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
>       Head->Size      = Size;
>       Head->Type      = (EFI_MEMORY_TYPE) PoolType;
>       Buffer          = Head->Data;
> @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
>     CoreFreePoolPages (Memory, NoPages);
>     CoreReleaseMemoryLock ();
>   
> +  GuardFreedPages (Memory, NoPages);
>     ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
>       (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
>   }
> @@ -685,15 +689,19 @@ CoreFreePoolI (
>     UINTN       Granularity;
>     BOOLEAN     IsGuarded;
>     BOOLEAN     HasPoolTail;
> +  BOOLEAN     PageAsPool;
>   
>     ASSERT(Buffer != NULL);
>     //
>     // Get the head & tail of the pool entry
>     //
> -  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
> +  Head = BASE_CR (Buffer, POOL_HEAD, Data);
>     ASSERT(Head != NULL);
>   
> -  if (Head->Signature != POOL_HEAD_SIGNATURE) {
> +  if (Head->Signature != POOL_HEAD_SIGNATURE &&
> +      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
> +    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
> +            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
>       return EFI_INVALID_PARAMETER;
>     }
>   
> @@ -701,6 +709,7 @@ CoreFreePoolI (
>                   IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
>     HasPoolTail = !(IsGuarded &&
>                     ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> +  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
>   
>     if (HasPoolTail) {
>       Tail = HEAD_TO_TAIL (Head);
> @@ -757,7 +766,7 @@ CoreFreePoolI (
>     //
>     // If it's not on the list, it must be pool pages
>     //
> -  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
> +  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
>   
>       //
>       // Return the memory pages back to free memory
> diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> index 05eb4f422b..f6595c90ed 100644
> --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>   #include <Guid/PropertiesTable.h>
>   
>   #include "DxeMain.h"
> +#include "HeapGuard.h"
>   
>   #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
>     ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
> @@ -205,16 +206,13 @@ MergeMemoryMap (
>       NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
>   
>       do {
> -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
> +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
> +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
>         if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
> -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> -        if (NewMemoryMapEntry != MemoryMapEntry) {
> -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> -        }
> -
> +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
>           NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
>           continue;
>         } else {
> 



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection
  2018-10-22  2:53   ` Zeng, Star
@ 2018-10-22  7:12     ` Wang, Jian J
  2018-10-22  8:23       ` Zeng, Star
  0 siblings, 1 reply; 16+ messages in thread
From: Wang, Jian J @ 2018-10-22  7:12 UTC (permalink / raw)
  To: Zeng, Star, edk2-devel@lists.01.org
  Cc: Kinney, Michael D, Ni, Ruiyu, Yao, Jiewen, Laszlo Ersek

Star,

Thanks for the comment. Please see my opinions below.

Regards,
Jian


> -----Original Message-----
> From: Zeng, Star
> Sent: Monday, October 22, 2018 10:53 AM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
> Subject: Re: [edk2] [PATCH 3/3] MdeModulePkg/Core: add use-after-free
> memory detection
> 
> On 2018/10/19 9:50, Jian J Wang wrote:
> > UAF (Use-After-Free) memory detection is new feature introduced to
> > detect illegal access to memory which has been freed. The principle
> > behind is similar to heap guard feature, that is the core turn all pool
> > memory allocation to page allocation and mark them to be not-present
> > once they are freed.
> >
> > This also implies that, once a page is allocated and freed, it cannot
> > be re-allocated. This will bring another issue, which is that there's
> > risk that memory space will be used out. To address it, the memory
> > service add logic to put part (at most 64 pages a time) of freed pages
> > back into page pool, so that the memory service can still have memory
> > to allocate, when all memory space have been allocated once. This is
> > called memory promotion. The promoted pages are always from the eldest
> > pages which have been freed.
> >
> > To use this feature, one can simply set following PCD to 1
> >
> gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
> >
> > Please note this feature cannot be used with heap guard feature controlled
> > by PCD gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask.
> >
> > Cc: Star Zeng <star.zeng@intel.com>
> > Cc: Michael D Kinney <michael.d.kinney@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> 
> I did not review the whole patch thoroughly. But I suggest putting the
> code for UAF in a separate file if that is feasible. I am aware that
> UAF will share much code with HeapGuard, so could we have Uaf.c,
> HeapGuard.c and GuardPage.c (these file names are just examples)?
> 

I think we can treat UAF as a special kind of heap guard: guarding the
freed heap memory instead of guarding the allocated heap memory. So
UAF is a complement of Heap Guard, or part of Heap Guard. From this
perspective, it's not necessary to put it in a separate file.

> > ---
> >   MdeModulePkg/Core/Dxe/DxeMain.h              |   1 +
> >   MdeModulePkg/Core/Dxe/DxeMain.inf            |   1 +
> >   MdeModulePkg/Core/Dxe/Gcd/Gcd.c              |   8 +
> >   MdeModulePkg/Core/Dxe/Mem/HeapGuard.c        | 393
> ++++++++++++++++++++++++++-
> >   MdeModulePkg/Core/Dxe/Mem/HeapGuard.h        |  66 +++++
> >   MdeModulePkg/Core/Dxe/Mem/Page.c             |  39 ++-
> >   MdeModulePkg/Core/Dxe/Mem/Pool.c             |  21 +-
> >   MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c |  16 +-
> >   8 files changed, 519 insertions(+), 26 deletions(-)
> >
> > diff --git a/MdeModulePkg/Core/Dxe/DxeMain.h
> b/MdeModulePkg/Core/Dxe/DxeMain.h
> > index 2dec9da5e3..ae75cc5b25 100644
> > --- a/MdeModulePkg/Core/Dxe/DxeMain.h
> > +++ b/MdeModulePkg/Core/Dxe/DxeMain.h
> > @@ -92,6 +92,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY
> KIND, EITHER EXPRESS OR IMPLIED.
> >   #include <Library/DxeServicesLib.h>
> >   #include <Library/DebugAgentLib.h>
> >   #include <Library/CpuExceptionHandlerLib.h>
> > +#include <Library/DebugPrintErrorLevelLib.h>
> >
> >
> >   //
> > diff --git a/MdeModulePkg/Core/Dxe/DxeMain.inf
> b/MdeModulePkg/Core/Dxe/DxeMain.inf
> > index 10375443c0..d91258c087 100644
> > --- a/MdeModulePkg/Core/Dxe/DxeMain.inf
> > +++ b/MdeModulePkg/Core/Dxe/DxeMain.inf
> > @@ -198,6 +198,7 @@
> >     gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType                       ##
> CONSUMES
> >     gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
> ## CONSUMES
> >     gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard                           ##
> CONSUMES
> > +
> gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
> ## CONSUMES
> >
> >   # [Hob]
> >   # RESOURCE_DESCRIPTOR   ## CONSUMES
> > diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > index d9c65a8045..e43ec74010 100644
> > --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > @@ -160,6 +160,10 @@ CoreDumpGcdMemorySpaceMap (
> >       EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *MemorySpaceMap;
> >       UINTN                            Index;
> >
> > +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
> > +      return;
> > +    }
> 
> I am aware that the purpose of this change is to optimize some code when
> DEBUG_GCD is not set.
> But I do not suggest newly introducing a dependency on
> DebugPrintErrorLevelLib; I think this check should be hidden in the
> instance of DebugLib. Maybe a new macro DEBUG_CODE_ERROR_LEVEL (this
> macro name is just an example) can be introduced if we really want to
> do that.
> 

There are two purposes here: one is optimization, but the other is to fix a
real issue: GCD memory lock reentrance (see the call chain below for details)

   CoreDumpGcdMemorySpaceMap()
=> CoreGetMemorySpaceMap()
=> CoreAcquireGcdMemoryLock () *
   AllocatePool()
=> InternalAllocatePool()
=> CoreAllocatePool()
=> CoreAllocatePoolI()
=> CoreAllocatePoolPagesI()
=> CoreAllocatePoolPages()
=> FindFreePages()
=> PromoteMemoryResource()
=> CoreAcquireGcdMemoryLock() **

If GetDebugPrintErrorLevel() is not recommended, maybe we need to fix the
above issue in a different way. For example, change CoreGetMemorySpaceMap()
to avoid the lock conflict, if possible. In addition, we could use
PcdFixedDebugPrintErrorLevel to control the DEBUG_GCD print, as sketched
below.
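
A minimal sketch of the latter (assuming the platform sets
PcdFixedDebugPrintErrorLevel to match its DebugLib configuration):

  if ((FixedPcdGet32 (PcdFixedDebugPrintErrorLevel) & DEBUG_GCD) == 0) {
    return;
  }

Since FixedPcdGet32() expands to a compile-time constant, the early return
lets the compiler drop the whole dump path when DEBUG_GCD is disabled, so
CoreGetMemorySpaceMap() -- and the AllocatePool() it triggers while holding
the GCD memory lock -- is never entered, without adding a dependency on
DebugPrintErrorLevelLib.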

> 
> Thanks,
> Star
> 
> > +
> >       Status = CoreGetMemorySpaceMap (&NumberOfDescriptors,
> &MemorySpaceMap);
> >       ASSERT (Status == EFI_SUCCESS && MemorySpaceMap != NULL);
> >
> > @@ -202,6 +206,10 @@ CoreDumpGcdIoSpaceMap (
> >       EFI_GCD_IO_SPACE_DESCRIPTOR  *IoSpaceMap;
> >       UINTN                        Index;
> >
> > +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
> > +      return;
> > +    }
> > +
> >       Status = CoreGetIoSpaceMap (&NumberOfDescriptors, &IoSpaceMap);
> >       ASSERT (Status == EFI_SUCCESS && IoSpaceMap != NULL);
> >
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > index 663f969c0d..b120c04f8f 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN
> mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >   GLOBAL_REMOVE_IF_UNREFERENCED UINTN
> mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >                                       = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
> >
> > +//
> > +// Used for Use-After-Free memory detection to promote freed but not used
> pages.
> > +//
> > +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS
> mLastPromotedPage = BASE_4GB;
> > +
> >   /**
> >     Set corresponding bits in bitmap table to 1 according to the address.
> >
> > @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
> >
> >     @return An integer containing the guarded memory bitmap.
> >   **/
> > -UINTN
> > +UINT64
> >   GetGuardedMemoryBits (
> >     IN EFI_PHYSICAL_ADDRESS    Address,
> >     IN UINTN                   NumberOfPages
> > @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
> >   {
> >     UINT64            *BitMap;
> >     UINTN             Bits;
> > -  UINTN             Result;
> > +  UINT64            Result;
> >     UINTN             Shift;
> >     UINTN             BitsToUnitEnd;
> >
> > @@ -1203,6 +1208,373 @@ SetAllGuardPages (
> >     }
> >   }
> >
> > +/**
> > +  Check to see if the Use-After-Free (UAF) feature is enabled or not.
> > +
> > +  @return TRUE/FALSE.
> > +**/
> > +BOOLEAN
> > +IsUafEnabled (
> > +  VOID
> > +  )
> > +{
> > +  ASSERT (!IsHeapGuardEnabled());
> > +  return ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) != 0);
> > +}
> > +
> > +/**
> > +  Find the address of top-most guarded free page.
> > +
> > +  @param[out]  Address    Start address of top-most guarded free page.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +GetLastGuardedFreePageAddress (
> > +  OUT EFI_PHYSICAL_ADDRESS      *Address
> > +  )
> > +{
> > +  EFI_PHYSICAL_ADDRESS    AddressGranularity;
> > +  EFI_PHYSICAL_ADDRESS    BaseAddress;
> > +  UINTN                   Level;
> > +  UINT64                  Map;
> > +  INTN                    Index;
> > +
> > +  ASSERT (mMapLevel >= 1);
> > +
> > +  BaseAddress = 0;
> > +  Map = mGuardedMemoryMap;
> > +  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > +       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
> > +       ++Level) {
> > +    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
> > +
> > +    //
> > +    // Find the non-NULL entry at largest index.
> > +    //
> > +    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
> > +      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
> > +        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
> > +        Map = ((UINT64 *)(UINTN)Map)[Index];
> > +        break;
> > +      }
> > +    }
> > +  }
> > +
> > +  //
> > +  // Find the non-zero MSB then get the page address.
> > +  //
> > +  while (Map != 0) {
> > +    Map = RShiftU64 (Map, 1);
> > +    BaseAddress += EFI_PAGES_TO_SIZE (1);
> > +  }
> > +
> > +  *Address = BaseAddress;
> > +}
> > +
> > +/**
> > +  Record freed pages.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +MarkFreedPages (
> > +  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
> > +  IN UINTN                    Pages
> > +  )
> > +{
> > +  SetGuardedMemoryBits (BaseAddress, Pages);
> > +}
> > +
> > +/**
> > +  Record freed pages as well as mark them as not-present, if possible.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPages (
> > +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> > +  IN  UINTN                   Pages
> > +  )
> > +{
> > +  EFI_STATUS      Status;
> > +
> > +  //
> > +  // Do nothing if
> > +  // - Use-After-Free detection is not enabled
> > +  // - Freed memory is legacy memory lower than 1MB
> > +  //
> > +  if (!IsUafEnabled () || BaseAddress < BASE_1MB) {
> > +    return;
> > +  }
> > +
> > +  MarkFreedPages (BaseAddress, Pages);
> > +  if (gCpu != NULL) {
> > +    //
> > +    // Set flag to make sure allocating memory without GUARD for page table
> > +    // operation; otherwise infinite loops could be caused.
> > +    //
> > +    mOnGuarding = TRUE;
> > +    //
> > +    // Note: This might overwrite other attributes needed by other features,
> > +    // such as NX memory protection.
> > +    //
> > +    Status = gCpu->SetMemoryAttributes (
> > +                     gCpu,
> > +                     BaseAddress,
> > +                     EFI_PAGES_TO_SIZE(Pages),
> > +                     EFI_MEMORY_RP
> > +                     );
> > +    //
> > +    // Normally we should check the returned Status. But there might be memory
> > +    // alloc/free involved in SetMemoryAttributes(), which would fail this
> > +    // calling. It's rare case so it's OK to let a few tiny holes be not-guarded.
> > +    //
> > +    if (EFI_ERROR (Status)) {
> > +      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%d)\n", BaseAddress, Pages));
> > +    }
> > +    mOnGuarding = FALSE;
> > +  }
> > +}
> > +
> > +/**
> > +  Mark all pages freed before CPU Arch Protocol as not-present.
> > +
> > +**/
> > +VOID
> > +GuardAllFreedPages (
> > +  VOID
> > +  )
> > +{
> > +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    TableEntry;
> > +  UINT64    Address;
> > +  UINT64    GuardPage;
> > +  INTN      Level;
> > +  UINTN     BitIndex;
> > +  UINTN     GuardPageNumber;
> > +
> > +  if (mGuardedMemoryMap == 0 ||
> > +      mMapLevel == 0 ||
> > +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > +    return;
> > +  }
> > +
> > +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> > +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> > +
> > +  SetMem (Tables, sizeof(Tables), 0);
> > +  SetMem (Addresses, sizeof(Addresses), 0);
> > +  SetMem (Indices, sizeof(Indices), 0);
> > +
> > +  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > +  Tables[Level]   = mGuardedMemoryMap;
> > +  Address         = 0;
> > +  GuardPage       = (UINT64)-1;
> > +  GuardPageNumber = 0;
> > +
> > +  while (TRUE) {
> > +    if (Indices[Level] > Entries[Level]) {
> > +      Tables[Level] = 0;
> > +      Level        -= 1;
> > +    } else {
> > +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> > +      Address     = Addresses[Level];
> > +
> > +      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> > +        Level            += 1;
> > +        Tables[Level]     = TableEntry;
> > +        Addresses[Level]  = Address;
> > +        Indices[Level]    = 0;
> > +
> > +        continue;
> > +      } else {
> > +        BitIndex = 1;
> > +        while (BitIndex != 0) {
> > +          if ((TableEntry & BitIndex) != 0) {
> > +            if (GuardPage == (UINT64)-1) {
> > +              GuardPage = Address;
> > +            }
> > +            ++GuardPageNumber;
> > +          } else if (GuardPageNumber > 0) {
> > +            GuardFreedPages (GuardPage, GuardPageNumber);
> > +            GuardPageNumber = 0;
> > +            GuardPage       = (UINT64)-1;
> > +          }
> > +
> > +          if (TableEntry == 0) {
> > +            break;
> > +          }
> > +
> > +          Address += EFI_PAGES_TO_SIZE (1);
> > +          BitIndex = LShiftU64(BitIndex, 1);
> > +        }
> > +      }
> > +    }
> > +
> > +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> > +      break;
> > +    }
> > +
> > +    Indices[Level] += 1;
> > +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> > +    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
> > +
> > +  }
> > +
> > +  //
> > +  // Update the maximum address of freed page which can be used for memory
> > +  // promotion upon out-of-memory-space.
> > +  //
> > +  GetLastGuardedFreePageAddress (&Address);
> > +  if (Address != 0) {
> > +    mLastPromotedPage = Address;
> > +  }
> > +}
> > +
> > +/**
> > +  This function checks to see if the given memory map descriptor in a memory map
> > +  can be merged with any guarded free pages.
> > +
> > +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> > +  @param  MaxAddress        Maximum address to stop the merge.
> > +
> > +  @return VOID
> > +
> > +**/
> > +VOID
> > +MergeGuardPages (
> > +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> > +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> > +  )
> > +{
> > +  EFI_PHYSICAL_ADDRESS        EndAddress;
> > +  UINT64                      Bitmap;
> > +  INTN                        Pages;
> > +
> > +  if (!IsUafEnabled () ||
> > +      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
> > +    return;
> > +  }
> > +
> > +  Bitmap = 0;
> > +  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
> > +  Pages -= MemoryMapEntry->NumberOfPages;
> > +  while (Pages > 0) {
> > +    if (Bitmap == 0) {
> > +      EndAddress = MemoryMapEntry->PhysicalStart +
> > +                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
> > +      Bitmap = GetGuardedMemoryBits(EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    }
> > +
> > +    if ((Bitmap & 1) == 0) {
> > +      break;
> > +    }
> > +
> > +    Pages--;
> > +    MemoryMapEntry->NumberOfPages++;
> > +    Bitmap = RShiftU64 (Bitmap, 1);
> > +  }
> > +}
> > +
> > +/**
> > +  Put part (at most 64 pages a time) guarded free pages back to free page pool.
> > +
> > +  Use-After-Free detection makes use of 'Used then throw away' way to detect
> > +  any illegal access to freed memory. The thrown-away memory will be marked as
> > +  not-present so that any access to those memory (after free) will trigger
> > +  page-fault.
> > +
> > +  The problem is that this will consume lots of memory space. Once no memory
> > +  left in pool to allocate, we have to restore part of the freed pages to their
> > +  normal function. Otherwise the whole system will stop functioning.
> > +
> > +  @param  StartAddress    Start address of promoted memory.
> > +  @param  EndAddress      End address of promoted memory.
> > +
> > +  @return TRUE    Succeeded to promote memory.
> > +  @return FALSE   No free memory found.
> > +
> > +**/
> > +BOOLEAN
> > +PromoteGuardedFreePages (
> > +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> > +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> > +  )
> > +{
> > +  EFI_STATUS              Status;
> > +  UINTN                   AvailablePages;
> > +  UINT64                  Bitmap;
> > +  EFI_PHYSICAL_ADDRESS    Start;
> > +
> > +  if (!IsUafEnabled ()) {
> > +    return FALSE;
> > +  }
> > +
> > +  //
> > +  // Similar to memory allocation service, always search the freed pages in
> > +  // descending direction.
> > +  //
> > +  Start           = mLastPromotedPage;
> > +  AvailablePages  = 0;
> > +  while (AvailablePages == 0) {
> > +    Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    //
> > +    // If the address wraps around, try the really freed pages at top.
> > +    //
> > +    if (Start > mLastPromotedPage) {
> > +      GetLastGuardedFreePageAddress (&Start);
> > +      ASSERT (Start != 0);
> > +      Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    }
> > +
> > +    Bitmap = GetGuardedMemoryBits(Start, GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    while (Bitmap > 0) {
> > +      if ((Bitmap & 1) != 0) {
> > +        ++AvailablePages;
> > +      } else if (AvailablePages == 0) {
> > +        Start += EFI_PAGES_TO_SIZE (1);
> > +      } else {
> > +        break;
> > +      }
> > +
> > +      Bitmap = RShiftU64(Bitmap, 1);
> > +    }
> > +  }
> > +
> > +  if (AvailablePages) {
> > +    DEBUG((DEBUG_INFO, "Promoted pages: %lX (%X)\r\n", Start,
> AvailablePages));
> > +    if (gCpu != NULL) {
> > +      //
> > +      // Set flag to make sure allocating memory without GUARD for page table
> > +      // operation; otherwise infinite loops could be caused.
> > +      //
> > +      mOnGuarding = TRUE;
> > +      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
> > +      ASSERT_EFI_ERROR (Status);
> > +      mOnGuarding = FALSE;
> > +    }
> > +
> > +    mLastPromotedPage = Start;
> > +    *StartAddress     = Start;
> > +    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
> > +    return TRUE;
> > +  }
> > +
> > +  return FALSE;
> > +}
> > +
> >   /**
> >     Notify function used to set all Guard pages before CPU Arch Protocol installed.
> >   **/
> > @@ -1212,7 +1584,14 @@ HeapGuardCpuArchProtocolNotify (
> >     )
> >   {
> >     ASSERT (gCpu != NULL);
> > -  SetAllGuardPages ();
> > +
> > +  if ((PcdGet8 (PcdHeapGuardPropertyMask) & 0x0F) != 0) {
> > +    SetAllGuardPages ();
> > +  }
> > +
> > +  if (IsUafEnabled ()) {
> > +    GuardAllFreedPages ();
> > +  }
> >   }
> >
> >   /**
> > @@ -1264,6 +1643,14 @@ DumpGuardedMemoryBitmap (
> >     CHAR8     *Ruler1;
> >     CHAR8     *Ruler2;
> >
> > +  if (!IsHeapGuardEnabled () && !IsUafEnabled ()) {
> > +    return;
> > +  }
> > +
> > +  if ((GetDebugPrintErrorLevel () & HEAP_GUARD_DEBUG_LEVEL) == 0) {
> > +    return;
> > +  }
> > +
> >     if (mGuardedMemoryMap == 0 ||
> >         mMapLevel == 0 ||
> >         mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > index 8c34692439..3f2a5283c2 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > @@ -407,6 +407,72 @@ HeapGuardCpuArchProtocolNotify (
> >     VOID
> >     );
> >
> > +/**
> > +  This function checks to see if the given memory map descriptor in a memory map
> > +  can be merged with any guarded free pages.
> > +
> > +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> > +  @param  MaxAddress        Maximum address to stop the merge.
> > +
> > +  @return VOID
> > +
> > +**/
> > +VOID
> > +MergeGuardPages (
> > +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> > +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> > +  );
> > +
> > +/**
> > +  Record freed pages as well as mark them as not-present, if possible.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPages (
> > +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> > +  IN  UINTN                   Pages
> > +  );
> > +
> > +/**
> > +  Check to see if the use-after-free (UAF) feature is enabled or not.
> > +
> > +  @return TRUE/FALSE.
> > +**/
> > +BOOLEAN
> > +IsUafEnabled (
> > +  VOID
> > +  );
> > +
> > +/**
> > +  Put part (at most 64 pages a time) guarded free pages back to free page pool.
> > +
> > +  Use-After-Free detection makes use of 'Used then throw away' way to detect
> > +  any illegal access to freed memory. The thrown-away memory will be marked as
> > +  not-present so that any access to those memory (after free) will trigger
> > +  page-fault.
> > +
> > +  The problem is that this will consume lots of memory space. Once no memory
> > +  left in pool to allocate, we have to restore part of the freed pages to their
> > +  normal function. Otherwise the whole system will stop functioning.
> > +
> > +  @param  StartAddress    Start address of promoted memory.
> > +  @param  EndAddress      End address of promoted memory.
> > +
> > +  @return TRUE    Succeeded to promote memory.
> > +  @return FALSE   No free memory found.
> > +
> > +**/
> > +BOOLEAN
> > +PromoteGuardedFreePages (
> > +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> > +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> > +  );
> > +
> >   extern BOOLEAN mOnGuarding;
> >
> >   #endif
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
> > index 3b4cc08e7c..9f4419fa5b 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
> > @@ -401,9 +401,11 @@ PromoteMemoryResource (
> >     VOID
> >     )
> >   {
> > -  LIST_ENTRY         *Link;
> > -  EFI_GCD_MAP_ENTRY  *Entry;
> > -  BOOLEAN            Promoted;
> > +  LIST_ENTRY            *Link;
> > +  EFI_GCD_MAP_ENTRY     *Entry;
> > +  BOOLEAN               Promoted;
> > +  EFI_PHYSICAL_ADDRESS  StartAddress;
> > +  EFI_PHYSICAL_ADDRESS  EndAddress;
> >
> >     DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
> >
> > @@ -451,6 +453,24 @@ PromoteMemoryResource (
> >
> >     CoreReleaseGcdMemoryLock ();
> >
> > +  if (!Promoted) {
> > +    //
> > +    // If Use-After-Free detection is enabled, we might promote pages from
> > +    // guarded free pages.
> > +    //
> > +    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
> > +    if (Promoted) {
> > +      CoreAcquireGcdMemoryLock ();
> > +      CoreAddRange (
> > +        EfiConventionalMemory,
> > +        StartAddress,
> > +        EndAddress,
> > +        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
> > +        );
> > +      CoreReleaseGcdMemoryLock ();
> > +    }
> > +  }
> > +
> >     return Promoted;
> >   }
> >   /**
> > @@ -896,9 +916,13 @@ CoreConvertPagesEx (
> >       }
> >
> >       //
> > -    // Add our new range in
> > +    // Add our new range in. Don't do this for freed pages if Use-After-Free
> > +    // detection is enabled.
> >       //
> > -    CoreAddRange (MemType, Start, RangeEnd, Attribute);
> > +    if (!IsUafEnabled () || !ChangingType || MemType != EfiConventionalMemory) {
> > +      CoreAddRange (MemType, Start, RangeEnd, Attribute);
> > +    }
> > +
> >       if (ChangingType && (MemType == EfiConventionalMemory)) {
> >         //
> >         // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
> > @@ -1514,6 +1538,7 @@ CoreFreePages (
> >
> >     Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
> >     if (!EFI_ERROR (Status)) {
> > +    GuardFreedPages (Memory, NumberOfPages);
> >       CoreUpdateProfile (
> >         (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
> >         MemoryProfileActionFreePages,
> > @@ -1908,9 +1933,7 @@ Done:
> >     *MemoryMapSize = BufferSize;
> >
> >     DEBUG_CODE (
> > -    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
> > -      DumpGuardedMemoryBitmap ();
> > -    }
> > +    DumpGuardedMemoryBitmap ();
> >     );
> >
> >     return Status;
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > index 1ff2061f7f..0cd74267b3 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > @@ -26,7 +26,8 @@ typedef struct {
> >   } POOL_FREE;
> >
> >
> > -#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
> > +#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
> > +#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
> >   typedef struct {
> >     UINT32          Signature;
> >     UINT32          Reserved;
> > @@ -367,6 +368,7 @@ CoreAllocatePoolI (
> >     UINTN       NoPages;
> >     UINTN       Granularity;
> >     BOOLEAN     HasPoolTail;
> > +  BOOLEAN     PageAsPool;
> >
> >     ASSERT_LOCKED (&mPoolMemoryLock);
> >
> > @@ -386,6 +388,7 @@ CoreAllocatePoolI (
> >
> >     HasPoolTail  = !(NeedGuard &&
> >                      ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > +  PageAsPool = (IsUafEnabled () && !mOnGuarding);
> >
> >     //
> >     // Adjusting the Size to be of proper alignment so that
> > @@ -406,7 +409,7 @@ CoreAllocatePoolI (
> >     // If allocation is over max size, just allocate pages for the request
> >     // (slow)
> >     //
> > -  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
> > +  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
> >       if (!HasPoolTail) {
> >         Size -= sizeof (POOL_TAIL);
> >       }
> > @@ -498,7 +501,7 @@ Done:
> >       //
> >       // If we have a pool buffer, fill in the header & tail info
> >       //
> > -    Head->Signature = POOL_HEAD_SIGNATURE;
> > +    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
> >       Head->Size      = Size;
> >       Head->Type      = (EFI_MEMORY_TYPE) PoolType;
> >       Buffer          = Head->Data;
> > @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
> >     CoreFreePoolPages (Memory, NoPages);
> >     CoreReleaseMemoryLock ();
> >
> > +  GuardFreedPages (Memory, NoPages);
> >     ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
> >       (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
> >   }
> > @@ -685,15 +689,19 @@ CoreFreePoolI (
> >     UINTN       Granularity;
> >     BOOLEAN     IsGuarded;
> >     BOOLEAN     HasPoolTail;
> > +  BOOLEAN     PageAsPool;
> >
> >     ASSERT(Buffer != NULL);
> >     //
> >     // Get the head & tail of the pool entry
> >     //
> > -  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
> > +  Head = BASE_CR (Buffer, POOL_HEAD, Data);
> >     ASSERT(Head != NULL);
> >
> > -  if (Head->Signature != POOL_HEAD_SIGNATURE) {
> > +  if (Head->Signature != POOL_HEAD_SIGNATURE &&
> > +      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
> > +    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
> > +            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >       return EFI_INVALID_PARAMETER;
> >     }
> >
> > @@ -701,6 +709,7 @@ CoreFreePoolI (
> >                   IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
> >     HasPoolTail = !(IsGuarded &&
> >                     ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > +  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >
> >     if (HasPoolTail) {
> >       Tail = HEAD_TO_TAIL (Head);
> > @@ -757,7 +766,7 @@ CoreFreePoolI (
> >     //
> >     // If it's not on the list, it must be pool pages
> >     //
> > -  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
> > +  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
> >
> >       //
> >       // Return the memory pages back to free memory
> > diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > index 05eb4f422b..f6595c90ed 100644
> > --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> >   #include <Guid/PropertiesTable.h>
> >
> >   #include "DxeMain.h"
> > +#include "HeapGuard.h"
> >
> >   #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
> >     ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
> > @@ -205,16 +206,13 @@ MergeMemoryMap (
> >       NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
> >
> >       do {
> > -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
> > +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
> > +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
> >         if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
> > -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> > -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> > -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> > -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> > -        if (NewMemoryMapEntry != MemoryMapEntry) {
> > -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> > -        }
> > -
> > +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> > +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> > +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> > +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> >           NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
> >           continue;
> >         } else {
> >


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/3] UefiCpuPkg/CpuDxe: fix an infinite loop issue
  2018-10-19 11:45   ` Laszlo Ersek
@ 2018-10-22  7:23     ` Wang, Jian J
  0 siblings, 0 replies; 16+ messages in thread
From: Wang, Jian J @ 2018-10-22  7:23 UTC (permalink / raw)
  To: Laszlo Ersek, edk2-devel@lists.01.org
  Cc: Zeng, Star, Kinney, Michael D, Yao, Jiewen, Ni, Ruiyu

Laszlo,

Thanks for the comments.

Regards,
Jian


> -----Original Message-----
> From: Laszlo Ersek [mailto:lersek@redhat.com]
> Sent: Friday, October 19, 2018 7:46 PM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Zeng, Star <star.zeng@intel.com>; Kinney, Michael D
> <michael.d.kinney@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Ni, Ruiyu
> <ruiyu.ni@intel.com>
> Subject: Re: [PATCH 2/3] UefiCpuPkg/CpuDxe: fix an infinite loop issue
> 
> On 10/19/18 03:50, Jian J Wang wrote:
> > The UAF (Use-After-Free) memory detection feature will cause
> > infinite calls to InitializePageTablePool(). This is due to the
> > fact that AllocateAlignedPages() is used to allocate page table
> > pool memory. This function will most likely call gBS->FreePages
> > to free unaligned pages and then cause another round of page
> > attribute changes, as shown below
> >
> >    FreePages() <===============|
> > => SetMemoryAttributes()       |
> 
> This should likely be "SetMemorySpaceAttributes" (the DXE service), or else
> "CpuSetMemoryAttributes" (the underlying CpuDxe function name).
> 

You're right. I'll change it.

> > => <out of page table>         |
> > => InitializePageTablePool()   |
> > => AllocateAlignedPages()      |
> > => FreePages() ================|
> >
> > The solution is to add a lock in the page table pool allocation
> > function and fail any other requests while the allocation is in
> > progress.
> 
> OK, but what is the end result? InitializePageTablePool() will return FALSE. How
> far back up is that error propagated? To what components will the error be
> visible?
> 
> BTW, I've found the following comment in CpuSetMemoryAttributes():
> 
>   //
>   // During memory attributes updating, new pages may be allocated to setup
>   // smaller granularity of page table. Page allocation action might then cause
>   // another calling of CpuSetMemoryAttributes() recursively, due to memory
>   // protection policy configured (such as PcdDxeNxMemoryProtectionPolicy).
>   // Since this driver will always protect memory used as page table by itself,
>   // there's no need to apply protection policy requested from memory service.
>   // So it's safe to just return EFI_SUCCESS if this time of calling is caused
>   // by page table memory allocation.
>   //
> 
> Is the current argument similar? I think it should be documented somehow.
> 

No, I don't think they're similar. The issue I encountered here is that the code
tries to set freed memory as not-present but gets trapped in an infinite loop. The
only consequence is that the freed pages inside AllocateAlignedPages() cannot be
set as not-present. But that's OK, because they're freshly allocated and haven't
been used by any other code yet; see the sketch below for where that FreePages()
call comes from.
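
For reference, the reason gBS->FreePages() shows up inside an allocation at all
is the way AllocateAlignedPages() trims its over-allocation. Roughly (a
simplified sketch of the MemoryAllocationLib logic, with the tail trim and some
error handling elided, not the verbatim code):

  VOID *
  SketchAllocateAlignedPages (
    IN UINTN  Pages,
    IN UINTN  Alignment    // power of two, e.g. PAGE_TABLE_POOL_ALIGNMENT
    )
  {
    EFI_STATUS            Status;
    EFI_PHYSICAL_ADDRESS  Memory;
    UINTN                 AlignedMemory;
    UINTN                 AlignmentMask;
    UINTN                 UnalignedPages;
    UINTN                 RealPages;

    //
    // Over-allocate so that an aligned run of 'Pages' pages surely fits.
    //
    AlignmentMask = Alignment - 1;
    RealPages     = Pages + EFI_SIZE_TO_PAGES (Alignment);
    Status        = gBS->AllocatePages (AllocateAnyPages, EfiBootServicesData, RealPages, &Memory);
    if (EFI_ERROR (Status)) {
      return NULL;
    }

    AlignedMemory  = ((UINTN)Memory + AlignmentMask) & ~AlignmentMask;
    UnalignedPages = EFI_SIZE_TO_PAGES (AlignedMemory - (UINTN)Memory);
    if (UnalignedPages > 0) {
      //
      // Free the unaligned head pages; with UAF detection enabled, this
      // FreePages() call re-enters the page attribute update path.
      //
      gBS->FreePages (Memory, UnalignedPages);
    }
    //
    // ... the unused tail pages are freed the same way ...
    //
    return (VOID *)AlignedMemory;
  }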

> >
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Cc: Star Zeng <star.zeng@intel.com>
> > Cc: Michael D Kinney <michael.d.kinney@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> > ---
> >  UefiCpuPkg/CpuDxe/CpuPageTable.c | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> >
> > diff --git a/UefiCpuPkg/CpuDxe/CpuPageTable.c b/UefiCpuPkg/CpuDxe/CpuPageTable.c
> > index 33e8ee2d2c..2145e623fa 100644
> > --- a/UefiCpuPkg/CpuDxe/CpuPageTable.c
> > +++ b/UefiCpuPkg/CpuDxe/CpuPageTable.c
> > @@ -100,6 +100,7 @@ PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] = {
> >  };
> >
> >  PAGE_TABLE_POOL                   *mPageTablePool = NULL;
> > +EFI_LOCK                          mPageTablePoolLock = EFI_INITIALIZE_LOCK_VARIABLE (TPL_NOTIFY);
> 
> Why does this have to be an "EFI_LOCK"? Can't we just use a global variable? (I
> don't understand why messing with the TPL is necessary.)
> 
> In fact, I totally don't understand the point of EfiAcquireLock(). If we have two
> independent resources, each protected with its own separate lock, then why do
> both locks share the system-wide TPL between each other?
> 

Maybe you're right. A lock is a bit of overkill here. I'll try a plain global variable to find out if it's sufficient; a sketch of that idea follows below.
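
A minimal sketch of the global-variable approach, for discussion (the BOOLEAN
flag below replaces the EFI_LOCK of this revision; whether it needs more than
this is exactly what I'll verify):

  BOOLEAN  mPageTablePoolLock = FALSE;    // TRUE while a pool allocation is in progress

  BOOLEAN
  InitializePageTablePool (
    IN UINTN  PoolPages
    )
  {
    VOID  *Buffer;

    //
    // Fail re-entrant calls: the FreePages() inside AllocateAlignedPages()
    // below can trigger another attribute update that lands back here.
    //
    if (mPageTablePoolLock) {
      return FALSE;
    }
    mPageTablePoolLock = TRUE;

    Buffer = AllocateAlignedPages (PoolPages, PAGE_TABLE_POOL_ALIGNMENT);
    if (Buffer == NULL) {
      mPageTablePoolLock = FALSE;
      return FALSE;
    }

    //
    // ... existing pool bookkeeping and read-only protection ...
    //

    mPageTablePoolLock = FALSE;
    return TRUE;
  }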

> 
> >  PAGE_TABLE_LIB_PAGING_CONTEXT     mPagingContext;
> >  EFI_SMM_BASE2_PROTOCOL            *mSmmBase2 = NULL;
> >
> > @@ -1045,6 +1046,12 @@ InitializePageTablePool (
> >  {
> >    VOID                      *Buffer;
> >    BOOLEAN                   IsModified;
> > +  EFI_STATUS                Status;
> > +
> > +  Status = EfiAcquireLockOrFail (&mPageTablePoolLock);
> > +  if (EFI_ERROR (Status)) {
> > +    return FALSE;
> > +  }
> >
> >    //
> >    // Always reserve at least PAGE_TABLE_POOL_UNIT_PAGES, including one page for
> > @@ -1056,7 +1063,10 @@ InitializePageTablePool (
> >    Buffer = AllocateAlignedPages (PoolPages, PAGE_TABLE_POOL_ALIGNMENT);
> >    if (Buffer == NULL) {
> >      DEBUG ((DEBUG_ERROR, "ERROR: Out of aligned pages\r\n"));
> > +    EfiReleaseLock (&mPageTablePoolLock);
> 
> I feel that it would be safer to introduce a "Done" label at the bottom of the
> function, and release the lock there.
> 
> (Again, I'm not sure why this has to be a "lock".)
> 

Agreed. I'll update this part of the logic, along the lines of the sketch below.
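
For instance, independent of whether the lock stays or becomes a plain flag,
the single-exit structure could look like this sketch:

  BOOLEAN
  InitializePageTablePool (
    IN UINTN  PoolPages
    )
  {
    VOID     *Buffer;
    BOOLEAN  Result;

    if (EFI_ERROR (EfiAcquireLockOrFail (&mPageTablePoolLock))) {
      return FALSE;
    }

    Result = FALSE;
    Buffer = AllocateAlignedPages (PoolPages, PAGE_TABLE_POOL_ALIGNMENT);
    if (Buffer == NULL) {
      DEBUG ((DEBUG_ERROR, "ERROR: Out of aligned pages\r\n"));
      goto Done;
    }

    //
    // ... existing pool bookkeeping and attribute updates ...
    //
    Result = TRUE;

  Done:
    //
    // Single exit point: the lock is released on every path.
    //
    EfiReleaseLock (&mPageTablePoolLock);
    return Result;
  }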

> >      return FALSE;
> > +  } else {
> > +    DEBUG ((DEBUG_INFO, "Paging: added %d pages to page table pool\r\n",
> PoolPages));
> 
> Please don't print UINTN values with %d. Cast them to UINT64 and log them
> with %lu.
> 

You got me again. That's a shame :( But thanks for pointing it out.
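
For the record, the fix is just to widen the operand before printing, e.g.:

  DEBUG ((DEBUG_INFO, "Paging: added %lu pages to page table pool\r\n", (UINT64)PoolPages));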

> >    }
> >
> >    //
> > @@ -1092,6 +1102,8 @@ InitializePageTablePool (
> >      );
> >    ASSERT (IsModified == TRUE);
> >
> > +  EfiReleaseLock (&mPageTablePoolLock);
> > +
> >    return TRUE;
> >  }
> >
> >
> 
> Thanks
> Laszlo

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection
  2018-10-19 12:04   ` Laszlo Ersek
@ 2018-10-22  7:34     ` Wang, Jian J
  0 siblings, 0 replies; 16+ messages in thread
From: Wang, Jian J @ 2018-10-22  7:34 UTC (permalink / raw)
  To: Laszlo Ersek, edk2-devel@lists.01.org
  Cc: Zeng, Star, Kinney, Michael D, Yao, Jiewen, Ni, Ruiyu

Hi Laszlo,

Regards,
Jian


> -----Original Message-----
> From: Laszlo Ersek [mailto:lersek@redhat.com]
> Sent: Friday, October 19, 2018 8:05 PM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Zeng, Star <star.zeng@intel.com>; Kinney, Michael D
> <michael.d.kinney@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Ni, Ruiyu
> <ruiyu.ni@intel.com>
> Subject: Re: [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory
> detection
> 
> On 10/19/18 03:50, Jian J Wang wrote:
> > UAF (Use-After-Free) memory detection is a new feature introduced to
> > detect illegal access to memory which has been freed. The principle
> > behind it is similar to the heap guard feature: the core turns all pool
> > memory allocations into page allocations and marks them not-present
> > once they are freed.
> >
> > This also implies that, once a page is allocated and freed, it cannot
> > be re-allocated. This brings another issue: there's a risk that memory
> > space will run out. To address it, the memory service adds logic to put
> > part (at most 64 pages at a time) of the freed pages back into the page
> > pool, so that the memory service still has memory to allocate once all
> > memory space has been allocated. This is called memory promotion. The
> > promoted pages are always the eldest pages which have been freed.
> >
> > To use this feature, one can simply set following PCD to 1
> >   gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
> >
> > Please note this feature cannot be used with heap guard feature controlled
> > by PCD gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask.
> >
> > Cc: Star Zeng <star.zeng@intel.com>
> > Cc: Michael D Kinney <michael.d.kinney@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> > ---
> >  MdeModulePkg/Core/Dxe/DxeMain.h              |   1 +
> >  MdeModulePkg/Core/Dxe/DxeMain.inf            |   1 +
> >  MdeModulePkg/Core/Dxe/Gcd/Gcd.c              |   8 +
> >  MdeModulePkg/Core/Dxe/Mem/HeapGuard.c        | 393 ++++++++++++++++++++++++++-
> >  MdeModulePkg/Core/Dxe/Mem/HeapGuard.h        |  66 +++++
> >  MdeModulePkg/Core/Dxe/Mem/Page.c             |  39 ++-
> >  MdeModulePkg/Core/Dxe/Mem/Pool.c             |  21 +-
> >  MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c |  16 +-
> >  8 files changed, 519 insertions(+), 26 deletions(-)
> 
> I don't think I can review this patch well (I'm not an MdeModulePkg
> reviewer). However I do think this patch is way too large. I suggest
> splitting out everything that counts as "preparation" to separate patches.
> 
> For example:

Any comments are welcome.

> 
> > diff --git a/MdeModulePkg/Core/Dxe/DxeMain.h b/MdeModulePkg/Core/Dxe/DxeMain.h
> > index 2dec9da5e3..ae75cc5b25 100644
> > --- a/MdeModulePkg/Core/Dxe/DxeMain.h
> > +++ b/MdeModulePkg/Core/Dxe/DxeMain.h
> > @@ -92,6 +92,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> >  #include <Library/DxeServicesLib.h>
> >  #include <Library/DebugAgentLib.h>
> >  #include <Library/CpuExceptionHandlerLib.h>
> > +#include <Library/DebugPrintErrorLevelLib.h>
> >
> >
> >  //
> > diff --git a/MdeModulePkg/Core/Dxe/DxeMain.inf b/MdeModulePkg/Core/Dxe/DxeMain.inf
> > index 10375443c0..d91258c087 100644
> > --- a/MdeModulePkg/Core/Dxe/DxeMain.inf
> > +++ b/MdeModulePkg/Core/Dxe/DxeMain.inf
> > @@ -198,6 +198,7 @@
> >    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType                       ## CONSUMES
> >    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask                   ## CONSUMES
> >    gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard                           ## CONSUMES
> > +  gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask       ## CONSUMES
> >
> >  # [Hob]
> >  # RESOURCE_DESCRIPTOR   ## CONSUMES
> > diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > index d9c65a8045..e43ec74010 100644
> > --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > @@ -160,6 +160,10 @@ CoreDumpGcdMemorySpaceMap (
> >      EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *MemorySpaceMap;
> >      UINTN                            Index;
> >
> > +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
> > +      return;
> > +    }
> > +
> >      Status = CoreGetMemorySpaceMap (&NumberOfDescriptors, &MemorySpaceMap);
> >      ASSERT (Status == EFI_SUCCESS && MemorySpaceMap != NULL);
> >
> 
> this change doesn't belong to the UAF feature introduction. This change
> streamlines some debug code, which may or may not be useful, but either
> way, it should be presented separately, and discussed separately.
> 

There's an issue here caused by the introduction of UAF (lock reentrance, see below).
But you're right, I should put this in a separate patch.

   CoreDumpGcdMemorySpaceMap()
=> CoreGetMemorySpaceMap()
=> CoreAcquireGcdMemoryLock () *
   AllocatePool()
=> InternalAllocatePool()
=> CoreAllocatePool()
=> CoreAllocatePoolI()
=> CoreAllocatePoolPagesI()
=> CoreAllocatePoolPages()
=> FindFreePages()
=> PromoteMemoryResource()
=> CoreAcquireGcdMemoryLock() **

> > @@ -202,6 +206,10 @@ CoreDumpGcdIoSpaceMap (
> >      EFI_GCD_IO_SPACE_DESCRIPTOR  *IoSpaceMap;
> >      UINTN                        Index;
> >
> > +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
> > +      return;
> > +    }
> > +
> >      Status = CoreGetIoSpaceMap (&NumberOfDescriptors, &IoSpaceMap);
> >      ASSERT (Status == EFI_SUCCESS && IoSpaceMap != NULL);
> >
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > index 663f969c0d..b120c04f8f 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >  GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >                                      = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
> >
> > +//
> > +// Used for Use-After-Free memory detection to promote freed but not used pages.
> > +//
> > +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
> > +
> >  /**
> >    Set corresponding bits in bitmap table to 1 according to the address.
> >
> > @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
> >
> >    @return An integer containing the guarded memory bitmap.
> >  **/
> > -UINTN
> > +UINT64
> >  GetGuardedMemoryBits (
> >    IN EFI_PHYSICAL_ADDRESS    Address,
> >    IN UINTN                   NumberOfPages
> > @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
> >  {
> >    UINT64            *BitMap;
> >    UINTN             Bits;
> > -  UINTN             Result;
> > +  UINT64            Result;
> >    UINTN             Shift;
> >    UINTN             BitsToUnitEnd;
> >
> > @@ -1203,6 +1208,373 @@ SetAllGuardPages (
> >    }
> >  }
> >
> > +/**
> > +  Check to see if the Use-After-Free (UAF) feature is enabled or not.
> > +
> > +  @return TRUE/FALSE.
> > +**/
> > +BOOLEAN
> > +IsUafEnabled (
> > +  VOID
> > +  )
> > +{
> > +  ASSERT (!IsHeapGuardEnabled());
> 
> This seems strange. Do you really mean to assert that the heap guard is
> disabled *before* you check the UAF PCD setting?
> 
> ... Indeed, I see some confusing conditions related to this, below:
> 

I also found, after sending the patch out, that the logic here is not right. I'll fix it; one possible rework is sketched below.
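
One possible rework, so that the assert about the conflicting feature only
fires when UAF detection is actually enabled (a sketch of the intent, not the
final code):

  BOOLEAN
  IsUafEnabled (
    VOID
    )
  {
    BOOLEAN  Enabled;

    Enabled = ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) != 0);
    //
    // UAF detection and heap guard are mutually exclusive; only check the
    // conflict when UAF detection is turned on.
    //
    if (Enabled) {
      ASSERT (!IsHeapGuardEnabled ());
    }

    return Enabled;
  }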

> > +  return ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) != 0);
> > +}
> > +
> > +/**
> > +  Find the address of top-most guarded free page.
> > +
> > +  @param[out]  Address    Start address of top-most guarded free page.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +GetLastGuardedFreePageAddress (
> > +  OUT EFI_PHYSICAL_ADDRESS      *Address
> > +  )
> > +{
> > +  EFI_PHYSICAL_ADDRESS    AddressGranularity;
> > +  EFI_PHYSICAL_ADDRESS    BaseAddress;
> > +  UINTN                   Level;
> > +  UINT64                  Map;
> > +  INTN                    Index;
> > +
> > +  ASSERT (mMapLevel >= 1);
> > +
> > +  BaseAddress = 0;
> > +  Map = mGuardedMemoryMap;
> > +  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > +       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
> > +       ++Level) {
> > +    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
> > +
> > +    //
> > +    // Find the non-NULL entry at largest index.
> > +    //
> > +    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
> > +      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
> > +        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
> > +        Map = ((UINT64 *)(UINTN)Map)[Index];
> > +        break;
> > +      }
> > +    }
> > +  }
> > +
> > +  //
> > +  // Find the non-zero MSB then get the page address.
> > +  //
> > +  while (Map != 0) {
> > +    Map = RShiftU64 (Map, 1);
> > +    BaseAddress += EFI_PAGES_TO_SIZE (1);
> > +  }
> > +
> > +  *Address = BaseAddress;
> > +}
> > +
> > +/**
> > +  Record freed pages.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +MarkFreedPages (
> > +  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
> > +  IN UINTN                    Pages
> > +  )
> > +{
> > +  SetGuardedMemoryBits (BaseAddress, Pages);
> > +}
> > +
> > +/**
> > +  Record freed pages as well as mark them as not-present, if possible.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPages (
> > +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> > +  IN  UINTN                   Pages
> > +  )
> > +{
> > +  EFI_STATUS      Status;
> > +
> > +  //
> > +  // Do nothing if
> > +  // - Use-After-Free detection is not enabled
> > +  // - Freed memory is legacy memory lower than 1MB
> > +  //
> > +  if (!IsUafEnabled () || BaseAddress < BASE_1MB) {
> > +    return;
> > +  }
> 
> OK, so remember this: GuardFreedPages() calls IsUafEnabled().
> 
> > +
> > +  MarkFreedPages (BaseAddress, Pages);
> > +  if (gCpu != NULL) {
> > +    //
> > +    // Set flag to make sure allocating memory without GUARD for page table
> > +    // operation; otherwise infinite loops could be caused.
> > +    //
> > +    mOnGuarding = TRUE;
> > +    //
> > +    // Note: This might overwrite other attributes needed by other features,
> > +    // such as NX memory protection.
> > +    //
> > +    Status = gCpu->SetMemoryAttributes (
> > +                     gCpu,
> > +                     BaseAddress,
> > +                     EFI_PAGES_TO_SIZE(Pages),
> > +                     EFI_MEMORY_RP
> > +                     );
> > +    //
> > +    // Normally we should check the returned Status. But there might be memory
> > +    // alloc/free involved in SetMemoryAttributes(), which would fail this
> > +    // calling. It's rare case so it's OK to let a few tiny holes be not-guarded.
> > +    //
> > +    if (EFI_ERROR (Status)) {
> > +      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%d)\n", BaseAddress, Pages));
> > +    }
> > +    mOnGuarding = FALSE;
> > +  }
> > +}
> > +
> > +/**
> > +  Mark all pages freed before CPU Arch Protocol as not-present.
> > +
> > +**/
> > +VOID
> > +GuardAllFreedPages (
> > +  VOID
> > +  )
> > +{
> > +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    TableEntry;
> > +  UINT64    Address;
> > +  UINT64    GuardPage;
> > +  INTN      Level;
> > +  UINTN     BitIndex;
> > +  UINTN     GuardPageNumber;
> > +
> > +  if (mGuardedMemoryMap == 0 ||
> > +      mMapLevel == 0 ||
> > +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > +    return;
> > +  }
> > +
> > +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> > +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> > +
> > +  SetMem (Tables, sizeof(Tables), 0);
> > +  SetMem (Addresses, sizeof(Addresses), 0);
> > +  SetMem (Indices, sizeof(Indices), 0);
> > +
> > +  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > +  Tables[Level]   = mGuardedMemoryMap;
> > +  Address         = 0;
> > +  GuardPage       = (UINT64)-1;
> > +  GuardPageNumber = 0;
> > +
> > +  while (TRUE) {
> > +    if (Indices[Level] > Entries[Level]) {
> > +      Tables[Level] = 0;
> > +      Level        -= 1;
> > +    } else {
> > +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> > +      Address     = Addresses[Level];
> > +
> > +      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> > +        Level            += 1;
> > +        Tables[Level]     = TableEntry;
> > +        Addresses[Level]  = Address;
> > +        Indices[Level]    = 0;
> > +
> > +        continue;
> > +      } else {
> > +        BitIndex = 1;
> > +        while (BitIndex != 0) {
> > +          if ((TableEntry & BitIndex) != 0) {
> > +            if (GuardPage == (UINT64)-1) {
> > +              GuardPage = Address;
> > +            }
> > +            ++GuardPageNumber;
> > +          } else if (GuardPageNumber > 0) {
> > +            GuardFreedPages (GuardPage, GuardPageNumber);
> 
> Then GuardAllFreedPages() calls GuardFreedPages().
> 
> > +            GuardPageNumber = 0;
> > +            GuardPage       = (UINT64)-1;
> > +          }
> > +
> > +          if (TableEntry == 0) {
> > +            break;
> > +          }
> > +
> > +          Address += EFI_PAGES_TO_SIZE (1);
> > +          BitIndex = LShiftU64(BitIndex, 1);
> > +        }
> > +      }
> > +    }
> > +
> > +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> > +      break;
> > +    }
> > +
> > +    Indices[Level] += 1;
> > +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> > +    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
> > +
> > +  }
> > +
> > +  //
> > +  // Update the maximum address of freed page which can be used for memory
> > +  // promotion upon out-of-memory-space.
> > +  //
> > +  GetLastGuardedFreePageAddress (&Address);
> > +  if (Address != 0) {
> > +    mLastPromotedPage = Address;
> > +  }
> > +}
> > +
> > +/**
> > +  This function checks to see if the given memory map descriptor in a memory map
> > +  can be merged with any guarded free pages.
> > +
> > +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> > +  @param  MaxAddress        Maximum address to stop the merge.
> > +
> > +  @return VOID
> > +
> > +**/
> > +VOID
> > +MergeGuardPages (
> > +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> > +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> > +  )
> > +{
> > +  EFI_PHYSICAL_ADDRESS        EndAddress;
> > +  UINT64                      Bitmap;
> > +  INTN                        Pages;
> > +
> > +  if (!IsUafEnabled () ||
> > +      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
> > +    return;
> > +  }
> > +
> > +  Bitmap = 0;
> > +  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
> > +  Pages -= MemoryMapEntry->NumberOfPages;
> > +  while (Pages > 0) {
> > +    if (Bitmap == 0) {
> > +      EndAddress = MemoryMapEntry->PhysicalStart +
> > +                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
> > +      Bitmap = GetGuardedMemoryBits(EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    }
> > +
> > +    if ((Bitmap & 1) == 0) {
> > +      break;
> > +    }
> > +
> > +    Pages--;
> > +    MemoryMapEntry->NumberOfPages++;
> > +    Bitmap = RShiftU64 (Bitmap, 1);
> > +  }
> > +}
> > +
> > +/**
> > +  Put part (at most 64 pages a time) guarded free pages back to free page pool.
> > +
> > +  Use-After-Free detection makes use of 'Used then throw away' way to detect
> > +  any illegal access to freed memory. The thrown-away memory will be marked as
> > +  not-present so that any access to those memory (after free) will trigger
> > +  page-fault.
> > +
> > +  The problem is that this will consume lots of memory space. Once no memory
> > +  left in pool to allocate, we have to restore part of the freed pages to their
> > +  normal function. Otherwise the whole system will stop functioning.
> > +
> > +  @param  StartAddress    Start address of promoted memory.
> > +  @param  EndAddress      End address of promoted memory.
> > +
> > +  @return TRUE    Succeeded to promote memory.
> > +  @return FALSE   No free memory found.
> > +
> > +**/
> > +BOOLEAN
> > +PromoteGuardedFreePages (
> > +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> > +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> > +  )
> > +{
> > +  EFI_STATUS              Status;
> > +  UINTN                   AvailablePages;
> > +  UINT64                  Bitmap;
> > +  EFI_PHYSICAL_ADDRESS    Start;
> > +
> > +  if (!IsUafEnabled ()) {
> > +    return FALSE;
> > +  }
> > +
> > +  //
> > +  // Similar to memory allocation service, always search the freed pages in
> > +  // descending direction.
> > +  //
> > +  Start           = mLastPromotedPage;
> > +  AvailablePages  = 0;
> > +  while (AvailablePages == 0) {
> > +    Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    //
> > +    // If the address wraps around, try the really freed pages at top.
> > +    //
> > +    if (Start > mLastPromotedPage) {
> > +      GetLastGuardedFreePageAddress (&Start);
> > +      ASSERT (Start != 0);
> > +      Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    }
> > +
> > +    Bitmap = GetGuardedMemoryBits(Start, GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    while (Bitmap > 0) {
> > +      if ((Bitmap & 1) != 0) {
> > +        ++AvailablePages;
> > +      } else if (AvailablePages == 0) {
> > +        Start += EFI_PAGES_TO_SIZE (1);
> > +      } else {
> > +        break;
> > +      }
> > +
> > +      Bitmap = RShiftU64(Bitmap, 1);
> > +    }
> > +  }
> > +
> > +  if (AvailablePages) {
> > +    DEBUG((DEBUG_INFO, "Promoted pages: %lX (%X)\r\n", Start,
> AvailablePages));
> > +    if (gCpu != NULL) {
> > +      //
> > +      // Set flag to make sure allocating memory without GUARD for page table
> > +      // operation; otherwise infinite loops could be caused.
> > +      //
> > +      mOnGuarding = TRUE;
> > +      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
> > +      ASSERT_EFI_ERROR (Status);
> > +      mOnGuarding = FALSE;
> > +    }
> > +
> > +    mLastPromotedPage = Start;
> > +    *StartAddress     = Start;
> > +    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
> > +    return TRUE;
> > +  }
> > +
> > +  return FALSE;
> > +}
> > +
> >  /**
> >    Notify function used to set all Guard pages before CPU Arch Protocol installed.
> >  **/
> > @@ -1212,7 +1584,14 @@ HeapGuardCpuArchProtocolNotify (
> >    )
> >  {
> >    ASSERT (gCpu != NULL);
> > -  SetAllGuardPages ();
> > +
> > +  if ((PcdGet8 (PcdHeapGuardPropertyMask) & 0x0F) != 0) {
> > +    SetAllGuardPages ();
> > +  }
> > +
> > +  if (IsUafEnabled ()) {
> > +    GuardAllFreedPages ();
> 
> But GuardAllFreedPages() is only called dependent on IsUafEnabled().
> 
> This means that the internal IsUafEnabled() call, in GuardFreedPages(),
> is superfluous -- at least when GuardFreedPages() is called from
> GuardAllFreedPages().
> 
> There are other places that call GuardFreedPages() too, and from those
> call sites, the internal IsUafEnabled() does seem necessary.
> 
> This is very confusing when someone would like to confirm that this
> patch "does nothing" when the PCD is clear.
> 
> I suggest the following improvement:
> 
> - Make IsUafEnabled() a robust checker function. It should be possible
> to call from any context, and it should only trip an assert (regarding
> the heap guard) when the UAF detection is in fact enabled.
> 
> - Introduce two variants of GuardFreedPages(). The first variant should
> *assume* that UAF detection is enabled. GuardAllFreedPages() should call
> this 1st variant. The 2nd variant should check UAF explicitly, and then
> call the 1st variant.
> 
> Basically, when I read through this patch, I should be able to mentally
> replace some function calls with constant FALSE.
> 
> Thanks
> Laszlo
> 
> 

Thanks for the thorough review. You've given me some good suggestions. I'll
update the code to make it cleaner; the two-variant idea could look roughly
like the sketch below.
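
For instance (the variant names are my own placeholders, not the final API):

  //
  // Variant 1: assumes the caller has already checked that UAF detection
  // is enabled; does the real work.
  //
  VOID
  GuardFreedPagesInternal (
    IN EFI_PHYSICAL_ADDRESS  BaseAddress,
    IN UINTN                 Pages
    )
  {
    //
    // ... MarkFreedPages() plus gCpu->SetMemoryAttributes(EFI_MEMORY_RP),
    // exactly as in this revision of GuardFreedPages() ...
    //
  }

  //
  // Variant 2: safe to call from any context; checks the PCD first, so a
  // reader can mentally replace the call with nothing when UAF is off.
  //
  VOID
  EFIAPI
  GuardFreedPagesChecked (
    IN EFI_PHYSICAL_ADDRESS  BaseAddress,
    IN UINTN                 Pages
    )
  {
    if (IsUafEnabled ()) {
      GuardFreedPagesInternal (BaseAddress, Pages);
    }
  }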

> 
> > +  }
> >  }
> >
> >  /**
> > @@ -1264,6 +1643,14 @@ DumpGuardedMemoryBitmap (
> >    CHAR8     *Ruler1;
> >    CHAR8     *Ruler2;
> >
> > +  if (!IsHeapGuardEnabled () && !IsUafEnabled ()) {
> > +    return;
> > +  }
> > +
> > +  if ((GetDebugPrintErrorLevel () & HEAP_GUARD_DEBUG_LEVEL) == 0) {
> > +    return;
> > +  }
> > +
> >    if (mGuardedMemoryMap == 0 ||
> >        mMapLevel == 0 ||
> >        mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > index 8c34692439..3f2a5283c2 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > @@ -407,6 +407,72 @@ HeapGuardCpuArchProtocolNotify (
> >    VOID
> >    );
> >
> > +/**
> > +  This function checks to see if the given memory map descriptor in a memory map
> > +  can be merged with any guarded free pages.
> > +
> > +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> > +  @param  MaxAddress        Maximum address to stop the merge.
> > +
> > +  @return VOID
> > +
> > +**/
> > +VOID
> > +MergeGuardPages (
> > +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> > +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> > +  );
> > +
> > +/**
> > +  Record freed pages as well as mark them as not-present, if possible.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPages (
> > +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> > +  IN  UINTN                   Pages
> > +  );
> > +
> > +/**
> > +  Check to see if the use-after-free (UAF) feature is enabled or not.
> > +
> > +  @return TRUE/FALSE.
> > +**/
> > +BOOLEAN
> > +IsUafEnabled (
> > +  VOID
> > +  );
> > +
> > +/**
> > +  Put part (at most 64 pages a time) guarded free pages back to free page pool.
> > +
> > +  Use-After-Free detection makes use of 'Used then throw away' way to detect
> > +  any illegal access to freed memory. The thrown-away memory will be marked as
> > +  not-present so that any access to those memory (after free) will trigger
> > +  page-fault.
> > +
> > +  The problem is that this will consume lots of memory space. Once no memory
> > +  left in pool to allocate, we have to restore part of the freed pages to their
> > +  normal function. Otherwise the whole system will stop functioning.
> > +
> > +  @param  StartAddress    Start address of promoted memory.
> > +  @param  EndAddress      End address of promoted memory.
> > +
> > +  @return TRUE    Succeeded to promote memory.
> > +  @return FALSE   No free memory found.
> > +
> > +**/
> > +BOOLEAN
> > +PromoteGuardedFreePages (
> > +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> > +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> > +  );
> > +
> >  extern BOOLEAN mOnGuarding;
> >
> >  #endif
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
> > index 3b4cc08e7c..9f4419fa5b 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
> > @@ -401,9 +401,11 @@ PromoteMemoryResource (
> >    VOID
> >    )
> >  {
> > -  LIST_ENTRY         *Link;
> > -  EFI_GCD_MAP_ENTRY  *Entry;
> > -  BOOLEAN            Promoted;
> > +  LIST_ENTRY            *Link;
> > +  EFI_GCD_MAP_ENTRY     *Entry;
> > +  BOOLEAN               Promoted;
> > +  EFI_PHYSICAL_ADDRESS  StartAddress;
> > +  EFI_PHYSICAL_ADDRESS  EndAddress;
> >
> >    DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
> >
> > @@ -451,6 +453,24 @@ PromoteMemoryResource (
> >
> >    CoreReleaseGcdMemoryLock ();
> >
> > +  if (!Promoted) {
> > +    //
> > +    // If Use-After-Free detection is enabled, we might promote pages from
> > +    // guarded free pages.
> > +    //
> > +    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
> > +    if (Promoted) {
> > +      CoreAcquireGcdMemoryLock ();
> > +      CoreAddRange (
> > +        EfiConventionalMemory,
> > +        StartAddress,
> > +        EndAddress,
> > +        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
> > +        );
> > +      CoreReleaseGcdMemoryLock ();
> > +    }
> > +  }
> > +
> >    return Promoted;
> >  }
> >  /**
> > @@ -896,9 +916,13 @@ CoreConvertPagesEx (
> >      }
> >
> >      //
> > -    // Add our new range in
> > +    // Add our new range in. Don't do this for freed pages if Use-After-Free
> > +    // detection is enabled.
> >      //
> > -    CoreAddRange (MemType, Start, RangeEnd, Attribute);
> > +    if (!IsUafEnabled () || !ChangingType || MemType != EfiConventionalMemory) {
> > +      CoreAddRange (MemType, Start, RangeEnd, Attribute);
> > +    }
> > +
> >      if (ChangingType && (MemType == EfiConventionalMemory)) {
> >        //
> >        // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
> > @@ -1514,6 +1538,7 @@ CoreFreePages (
> >
> >    Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
> >    if (!EFI_ERROR (Status)) {
> > +    GuardFreedPages (Memory, NumberOfPages);
> >      CoreUpdateProfile (
> >        (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
> >        MemoryProfileActionFreePages,
> > @@ -1908,9 +1933,7 @@ Done:
> >    *MemoryMapSize = BufferSize;
> >
> >    DEBUG_CODE (
> > -    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
> > -      DumpGuardedMemoryBitmap ();
> > -    }
> > +    DumpGuardedMemoryBitmap ();
> >    );
> >
> >    return Status;
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > index 1ff2061f7f..0cd74267b3 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > @@ -26,7 +26,8 @@ typedef struct {
> >  } POOL_FREE;
> >
> >
> > -#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
> > +#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
> > +#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
> >  typedef struct {
> >    UINT32          Signature;
> >    UINT32          Reserved;
> > @@ -367,6 +368,7 @@ CoreAllocatePoolI (
> >    UINTN       NoPages;
> >    UINTN       Granularity;
> >    BOOLEAN     HasPoolTail;
> > +  BOOLEAN     PageAsPool;
> >
> >    ASSERT_LOCKED (&mPoolMemoryLock);
> >
> > @@ -386,6 +388,7 @@ CoreAllocatePoolI (
> >
> >    HasPoolTail  = !(NeedGuard &&
> >                     ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > +  PageAsPool = (IsUafEnabled () && !mOnGuarding);
> >
> >    //
> >    // Adjusting the Size to be of proper alignment so that
> > @@ -406,7 +409,7 @@ CoreAllocatePoolI (
> >    // If allocation is over max size, just allocate pages for the request
> >    // (slow)
> >    //
> > -  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
> > +  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
> >      if (!HasPoolTail) {
> >        Size -= sizeof (POOL_TAIL);
> >      }
> > @@ -498,7 +501,7 @@ Done:
> >      //
> >      // If we have a pool buffer, fill in the header & tail info
> >      //
> > -    Head->Signature = POOL_HEAD_SIGNATURE;
> > +    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
> >      Head->Size      = Size;
> >      Head->Type      = (EFI_MEMORY_TYPE) PoolType;
> >      Buffer          = Head->Data;
> > @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
> >    CoreFreePoolPages (Memory, NoPages);
> >    CoreReleaseMemoryLock ();
> >
> > +  GuardFreedPages (Memory, NoPages);
> >    ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
> >      (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
> >  }
> > @@ -685,15 +689,19 @@ CoreFreePoolI (
> >    UINTN       Granularity;
> >    BOOLEAN     IsGuarded;
> >    BOOLEAN     HasPoolTail;
> > +  BOOLEAN     PageAsPool;
> >
> >    ASSERT(Buffer != NULL);
> >    //
> >    // Get the head & tail of the pool entry
> >    //
> > -  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
> > +  Head = BASE_CR (Buffer, POOL_HEAD, Data);
> >    ASSERT(Head != NULL);
> >
> > -  if (Head->Signature != POOL_HEAD_SIGNATURE) {
> > +  if (Head->Signature != POOL_HEAD_SIGNATURE &&
> > +      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
> > +    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
> > +            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >      return EFI_INVALID_PARAMETER;
> >    }
> >
> > @@ -701,6 +709,7 @@ CoreFreePoolI (
> >                  IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
> >    HasPoolTail = !(IsGuarded &&
> >                    ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > +  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >
> >    if (HasPoolTail) {
> >      Tail = HEAD_TO_TAIL (Head);
> > @@ -757,7 +766,7 @@ CoreFreePoolI (
> >    //
> >    // If it's not on the list, it must be pool pages
> >    //
> > -  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
> > +  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
> >
> >      //
> >      // Return the memory pages back to free memory
> > diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > index 05eb4f422b..f6595c90ed 100644
> > --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY
> KIND, EITHER EXPRESS OR IMPLIED.
> >  #include <Guid/PropertiesTable.h>
> >
> >  #include "DxeMain.h"
> > +#include "HeapGuard.h"
> >
> >  #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
> >    ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
> > @@ -205,16 +206,13 @@ MergeMemoryMap (
> >      NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
> >
> >      do {
> > -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
> > +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
> > +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
> >        if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
> > -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> > -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> > -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> > -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> > -        if (NewMemoryMapEntry != MemoryMapEntry) {
> > -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> > -        }
> > -
> > +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> > +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> > +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> > +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> >          NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
> >          continue;
> >        } else {
> >


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection
  2018-10-22  7:12     ` Wang, Jian J
@ 2018-10-22  8:23       ` Zeng, Star
  2018-10-23  1:24         ` Wang, Jian J
  0 siblings, 1 reply; 16+ messages in thread
From: Zeng, Star @ 2018-10-22  8:23 UTC (permalink / raw)
  To: Wang, Jian J, edk2-devel@lists.01.org
  Cc: Kinney, Michael D, Ni, Ruiyu, Yao, Jiewen, Laszlo Ersek,
	star.zeng

On 2018/10/22 15:12, Wang, Jian J wrote:
> Star,
> 
> Thanks for the comment. Please see my opinions below.
> 
> Regards,
> Jian
> 
> 
>> -----Original Message-----
>> From: Zeng, Star
>> Sent: Monday, October 22, 2018 10:53 AM
>> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
>> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
>> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
>> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
>> Subject: Re: [edk2] [PATCH 3/3] MdeModulePkg/Core: add use-after-free
>> memory detection
>>
>> On 2018/10/19 9:50, Jian J Wang wrote:
>>> UAF (Use-After-Free) memory detection is a new feature introduced to
>>> detect illegal access to memory which has been freed. The principle
>>> behind it is similar to the heap guard feature: the core turns all pool
>>> memory allocations into page allocations and marks them as not-present
>>> once they are freed.
>>>
>>> This also implies that, once a page is allocated and freed, it cannot
>>> be re-allocated. This brings another issue: there's a risk that memory
>>> space will be exhausted. To address it, the memory service adds logic
>>> to put part (at most 64 pages at a time) of the freed pages back into
>>> the page pool, so that the memory service still has memory to allocate
>>> once all memory space has been allocated. This is called memory
>>> promotion. The promoted pages are always the oldest pages that have
>>> been freed.
>>>
>>> To use this feature, one can simply set following PCD to 1
>>>
>> gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
>>>
>>> Please note this feature cannot be used with heap guard feature controlled
>>> by PCD gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask.
>>>
>>> Cc: Star Zeng <star.zeng@intel.com>
>>> Cc: Michael D Kinney <michael.d.kinney@intel.com>
>>> Cc: Jiewen Yao <jiewen.yao@intel.com>
>>> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
>>> Cc: Laszlo Ersek <lersek@redhat.com>
>>> Contributed-under: TianoCore Contribution Agreement 1.1
>>> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
>>
>> I did not review the whole patch thoroughly. But I suggest putting
>> the code for UAF in a separate file if it is feasible. I am aware that
>> UAF will be sharing much code with HeapGuard, so could we have Uaf.c,
>> HeapGuard.c and GuardPage.c (these file names are just examples)?
>>
> 
> I think we can take UAF as a special kind of heap guard: guarding the
> freed heap memory instead of guarding the allocated heap memory. So
> UAF is a complement of Heap Guard, or part of Heap Guard. From this
> perspective, it's not necessary to put it in a separate file.

Very good point. From this perspective, I agree.
Then I would like to suggest not introducing a new PCD
PcdUseAfterFreeDetectionPropertyMask, but using one BIT of
PcdHeapGuardPropertyMask for it, since we think it is part of Heap Guard
(see the sketch after the list below). The benefits are:
1. No confusion between Heap Guard and Use After Free.
2. No need to introduce a new PCD.
3. Can reuse BIT6 - Enable non-stop mode.
4. No need to update DxeIplPeim to enable IA32 PAE paging.
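
For example, a minimal sketch of what the check could look like if one
Heap Guard bit were reserved for freed-memory guarding. BIT4 here is
purely an assumption for illustration; the real bit assignment would
have to be defined and documented in MdeModulePkg.dec:

  //
  // Sketch only: TRUE if the hypothetical "guard freed memory" bit of
  // PcdHeapGuardPropertyMask is set. BIT4 is an assumption, not an
  // actual bit definition.
  //
  BOOLEAN
  IsUafEnabled (
    VOID
    )
  {
    return ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT4) != 0);
  }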

> 
>>> ---
>>>    MdeModulePkg/Core/Dxe/DxeMain.h              |   1 +
>>>    MdeModulePkg/Core/Dxe/DxeMain.inf            |   1 +
>>>    MdeModulePkg/Core/Dxe/Gcd/Gcd.c              |   8 +
>>>    MdeModulePkg/Core/Dxe/Mem/HeapGuard.c        | 393
>> ++++++++++++++++++++++++++-
>>>    MdeModulePkg/Core/Dxe/Mem/HeapGuard.h        |  66 +++++
>>>    MdeModulePkg/Core/Dxe/Mem/Page.c             |  39 ++-
>>>    MdeModulePkg/Core/Dxe/Mem/Pool.c             |  21 +-
>>>    MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c |  16 +-
>>>    8 files changed, 519 insertions(+), 26 deletions(-)
>>>
>>> diff --git a/MdeModulePkg/Core/Dxe/DxeMain.h
>> b/MdeModulePkg/Core/Dxe/DxeMain.h
>>> index 2dec9da5e3..ae75cc5b25 100644
>>> --- a/MdeModulePkg/Core/Dxe/DxeMain.h
>>> +++ b/MdeModulePkg/Core/Dxe/DxeMain.h
>>> @@ -92,6 +92,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY
>> KIND, EITHER EXPRESS OR IMPLIED.
>>>    #include <Library/DxeServicesLib.h>
>>>    #include <Library/DebugAgentLib.h>
>>>    #include <Library/CpuExceptionHandlerLib.h>
>>> +#include <Library/DebugPrintErrorLevelLib.h>
>>>
>>>
>>>    //
>>> diff --git a/MdeModulePkg/Core/Dxe/DxeMain.inf
>> b/MdeModulePkg/Core/Dxe/DxeMain.inf
>>> index 10375443c0..d91258c087 100644
>>> --- a/MdeModulePkg/Core/Dxe/DxeMain.inf
>>> +++ b/MdeModulePkg/Core/Dxe/DxeMain.inf
>>> @@ -198,6 +198,7 @@
>>>      gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType                       ##
>> CONSUMES
>>>      gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
>> ## CONSUMES
>>>      gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard                           ##
>> CONSUMES
>>> +
>> gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
>> ## CONSUMES
>>>
>>>    # [Hob]
>>>    # RESOURCE_DESCRIPTOR   ## CONSUMES
>>> diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
>> b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
>>> index d9c65a8045..e43ec74010 100644
>>> --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
>>> +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
>>> @@ -160,6 +160,10 @@ CoreDumpGcdMemorySpaceMap (
>>>        EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *MemorySpaceMap;
>>>        UINTN                            Index;
>>>
>>> +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
>>> +      return;
>>> +    }
>>
>> I am aware the purpose of this change is to optimize some code when
>> DEBUG_GCD is not set.
>> But I do not suggest we newly introduce a dependency on
>> DebugPrintErrorLevelLib; I think this check should be hidden in the
>> instance of DebugLib. Maybe a new macro DEBUG_CODE_ERROR_LEVEL (this
>> macro name is just an example) can be introduced if we really want to
>> do that.
>>
> 
> There're two purposes here: one is optimization, but the other is to fix a real
> issue: GCD memory lock reentrance (see below for details)
> 
>     CoreDumpGcdMemorySpaceMap()
> => CoreGetMemorySpaceMap()
> => CoreAcquireGcdMemoryLock () *
>     AllocatePool()
> => InternalAllocatePool()
> => CoreAllocatePool()
> => CoreAllocatePoolI()
> => CoreAllocatePoolPagesI()
> => CoreAllocatePoolPages()
> => FindFreePages()
> => PromoteMemoryResource()
> => CoreAcquireGcdMemoryLock() **
> 
> If GetDebugPrintErrorLevel() is not recommended, maybe we need to fix the above
> issue in a different way. For example, change CoreGetMemorySpaceMap() to avoid
> the lock conflict, if possible. In addition, we could set
> PcdFixedDebugPrintErrorLevel to control the DEBUG_GCD print.

Got it, thanks.
I still do not suggest using GetDebugPrintErrorLevel with a new
DebugPrintErrorLevelLib dependency.
Seemingly, the problem could not even be resolved thoroughly, because it
may still happen when DEBUG_GCD is set, right?

A possible solution is adding CoreAcquireGcdMemoryLockOrFail, updating
PromoteMemoryResource to use CoreAcquireGcdMemoryLockOrFail, and
replacing the ASSERT in CoreDumpGcdMemorySpaceMap with an error check;
a rough sketch follows below. Maybe there is another good idea.
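
As a rough sketch only (assuming CoreAcquireLockOrFail and
mGcdMemorySpaceLock keep their current DxeCore definitions; the function
name and usage are illustrative, not a final design):

  //
  // Sketch: try to take the GCD memory lock, but return an error instead
  // of blocking/asserting if it is already owned, so the promotion path
  // that runs under the lock can simply bail out.
  //
  EFI_STATUS
  CoreAcquireGcdMemoryLockOrFail (
    VOID
    )
  {
    return CoreAcquireLockOrFail (&mGcdMemorySpaceLock);
  }

PromoteMemoryResource would then skip the GCD update when this returns
EFI_ACCESS_DENIED, and CoreDumpGcdMemorySpaceMap would check the status
returned by CoreGetMemorySpaceMap instead of ASSERTing.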


Thanks,
Star

> 
>>
>> Thanks,
>> Star
>>
>>> +
>>>        Status = CoreGetMemorySpaceMap (&NumberOfDescriptors,
>> &MemorySpaceMap);
>>>        ASSERT (Status == EFI_SUCCESS && MemorySpaceMap != NULL);
>>>
>>> @@ -202,6 +206,10 @@ CoreDumpGcdIoSpaceMap (
>>>        EFI_GCD_IO_SPACE_DESCRIPTOR  *IoSpaceMap;
>>>        UINTN                        Index;
>>>
>>> +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
>>> +      return;
>>> +    }
>>> +
>>>        Status = CoreGetIoSpaceMap (&NumberOfDescriptors, &IoSpaceMap);
>>>        ASSERT (Status == EFI_SUCCESS && IoSpaceMap != NULL);
>>>
>>> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
>> b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
>>> index 663f969c0d..b120c04f8f 100644
>>> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
>>> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
>>> @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
>>>    GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
>>>                                        = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
>>>
>>> +//
>>> +// Used for Use-After-Free memory detection to promote freed but not used pages.
>>> +//
>>> +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
>>> +
>>>    /**
>>>      Set corresponding bits in bitmap table to 1 according to the address.
>>>
>>> @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
>>>
>>>      @return An integer containing the guarded memory bitmap.
>>>    **/
>>> -UINTN
>>> +UINT64
>>>    GetGuardedMemoryBits (
>>>      IN EFI_PHYSICAL_ADDRESS    Address,
>>>      IN UINTN                   NumberOfPages
>>> @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
>>>    {
>>>      UINT64            *BitMap;
>>>      UINTN             Bits;
>>> -  UINTN             Result;
>>> +  UINT64            Result;
>>>      UINTN             Shift;
>>>      UINTN             BitsToUnitEnd;
>>>
>>> @@ -1203,6 +1208,373 @@ SetAllGuardPages (
>>>      }
>>>    }
>>>
>>> +/**
>>> +  Check to see if the Use-After-Free (UAF) feature is enabled or not.
>>> +
>>> +  @return TRUE/FALSE.
>>> +**/
>>> +BOOLEAN
>>> +IsUafEnabled (
>>> +  VOID
>>> +  )
>>> +{
>>> +  ASSERT (!IsHeapGuardEnabled());
>>> +  return ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) != 0);
>>> +}
>>> +
>>> +/**
>>> +  Find the address of top-most guarded free page.
>>> +
>>> +  @param[out]  Address    Start address of top-most guarded free page.
>>> +
>>> +  @return VOID.
>>> +**/
>>> +VOID
>>> +GetLastGuardedFreePageAddress (
>>> +  OUT EFI_PHYSICAL_ADDRESS      *Address
>>> +  )
>>> +{
>>> +  EFI_PHYSICAL_ADDRESS    AddressGranularity;
>>> +  EFI_PHYSICAL_ADDRESS    BaseAddress;
>>> +  UINTN                   Level;
>>> +  UINT64                  Map;
>>> +  INTN                    Index;
>>> +
>>> +  ASSERT (mMapLevel >= 1);
>>> +
>>> +  BaseAddress = 0;
>>> +  Map = mGuardedMemoryMap;
>>> +  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
>>> +       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
>>> +       ++Level) {
>>> +    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
>>> +
>>> +    //
>>> +    // Find the non-NULL entry at largest index.
>>> +    //
>>> +    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
>>> +      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
>>> +        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
>>> +        Map = ((UINT64 *)(UINTN)Map)[Index];
>>> +        break;
>>> +      }
>>> +    }
>>> +  }
>>> +
>>> +  //
>>> +  // Find the non-zero MSB then get the page address.
>>> +  //
>>> +  while (Map != 0) {
>>> +    Map = RShiftU64 (Map, 1);
>>> +    BaseAddress += EFI_PAGES_TO_SIZE (1);
>>> +  }
>>> +
>>> +  *Address = BaseAddress;
>>> +}
>>> +
>>> +/**
>>> +  Record freed pages.
>>> +
>>> +  @param[in]  BaseAddress   Base address of just freed pages.
>>> +  @param[in]  Pages         Number of freed pages.
>>> +
>>> +  @return VOID.
>>> +**/
>>> +VOID
>>> +MarkFreedPages (
>>> +  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
>>> +  IN UINTN                    Pages
>>> +  )
>>> +{
>>> +  SetGuardedMemoryBits (BaseAddress, Pages);
>>> +}
>>> +
>>> +/**
>>> +  Record freed pages as well as mark them as not-present, if possible.
>>> +
>>> +  @param[in]  BaseAddress   Base address of just freed pages.
>>> +  @param[in]  Pages         Number of freed pages.
>>> +
>>> +  @return VOID.
>>> +**/
>>> +VOID
>>> +EFIAPI
>>> +GuardFreedPages (
>>> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
>>> +  IN  UINTN                   Pages
>>> +  )
>>> +{
>>> +  EFI_STATUS      Status;
>>> +
>>> +  //
>>> +  // Do nothing if
>>> +  // - Use-After-Free detection is not enabled
>>> +  // - Freed memory is legacy memory lower than 1MB
>>> +  //
>>> +  if (!IsUafEnabled () || BaseAddress < BASE_1MB) {
>>> +    return;
>>> +  }
>>> +
>>> +  MarkFreedPages (BaseAddress, Pages);
>>> +  if (gCpu != NULL) {
>>> +    //
>>> +    // Set flag to make sure allocating memory without GUARD for page table
>>> +    // operation; otherwise infinite loops could be caused.
>>> +    //
>>> +    mOnGuarding = TRUE;
>>> +    //
>>> +    // Note: This might overwrite other attributes needed by other features,
>>> +    // such as NX memory protection.
>>> +    //
>>> +    Status = gCpu->SetMemoryAttributes (
>>> +                     gCpu,
>>> +                     BaseAddress,
>>> +                     EFI_PAGES_TO_SIZE(Pages),
>>> +                     EFI_MEMORY_RP
>>> +                     );
>>> +    //
>>> +    // Normally we should check the returned Status. But there might be
>> memory
>>> +    // alloc/free involved in SetMemoryAttributes(), which would fail this
>>> +    // calling. It's rare case so it's OK to let a few tiny holes be not-guarded.
>>> +    //
>>> +    if (EFI_ERROR (Status)) {
>>> +      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%d)\n",
>> BaseAddress, Pages));
>>> +    }
>>> +    mOnGuarding = FALSE;
>>> +  }
>>> +}
>>> +
>>> +/**
>>> +  Mark all pages freed before CPU Arch Protocol as not-present.
>>> +
>>> +**/
>>> +VOID
>>> +GuardAllFreedPages (
>>> +  VOID
>>> +  )
>>> +{
>>> +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
>>> +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
>>> +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
>>> +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
>>> +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
>>> +  UINT64    TableEntry;
>>> +  UINT64    Address;
>>> +  UINT64    GuardPage;
>>> +  INTN      Level;
>>> +  UINTN     BitIndex;
>>> +  UINTN     GuardPageNumber;
>>> +
>>> +  if (mGuardedMemoryMap == 0 ||
>>> +      mMapLevel == 0 ||
>>> +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
>>> +    return;
>>> +  }
>>> +
>>> +  CopyMem (Entries, mLevelMask, sizeof (Entries));
>>> +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
>>> +
>>> +  SetMem (Tables, sizeof(Tables), 0);
>>> +  SetMem (Addresses, sizeof(Addresses), 0);
>>> +  SetMem (Indices, sizeof(Indices), 0);
>>> +
>>> +  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
>>> +  Tables[Level]   = mGuardedMemoryMap;
>>> +  Address         = 0;
>>> +  GuardPage       = (UINT64)-1;
>>> +  GuardPageNumber = 0;
>>> +
>>> +  while (TRUE) {
>>> +    if (Indices[Level] > Entries[Level]) {
>>> +      Tables[Level] = 0;
>>> +      Level        -= 1;
>>> +    } else {
>>> +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
>>> +      Address     = Addresses[Level];
>>> +
>>> +      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
>>> +        Level            += 1;
>>> +        Tables[Level]     = TableEntry;
>>> +        Addresses[Level]  = Address;
>>> +        Indices[Level]    = 0;
>>> +
>>> +        continue;
>>> +      } else {
>>> +        BitIndex = 1;
>>> +        while (BitIndex != 0) {
>>> +          if ((TableEntry & BitIndex) != 0) {
>>> +            if (GuardPage == (UINT64)-1) {
>>> +              GuardPage = Address;
>>> +            }
>>> +            ++GuardPageNumber;
>>> +          } else if (GuardPageNumber > 0) {
>>> +            GuardFreedPages (GuardPage, GuardPageNumber);
>>> +            GuardPageNumber = 0;
>>> +            GuardPage       = (UINT64)-1;
>>> +          }
>>> +
>>> +          if (TableEntry == 0) {
>>> +            break;
>>> +          }
>>> +
>>> +          Address += EFI_PAGES_TO_SIZE (1);
>>> +          BitIndex = LShiftU64(BitIndex, 1);
>>> +        }
>>> +      }
>>> +    }
>>> +
>>> +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
>>> +      break;
>>> +    }
>>> +
>>> +    Indices[Level] += 1;
>>> +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
>>> +    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
>>> +
>>> +  }
>>> +
>>> +  //
>>> +  // Update the maximum address of freed page which can be used for
>> memory
>>> +  // promotion upon out-of-memory-space.
>>> +  //
>>> +  GetLastGuardedFreePageAddress (&Address);
>>> +  if (Address != 0) {
>>> +    mLastPromotedPage = Address;
>>> +  }
>>> +}
>>> +
>>> +/**
>>> +  This function checks to see if the given memory map descriptor in a memory
>> map
>>> +  can be merged with any guarded free pages.
>>> +
>>> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
>>> +  @param  MaxAddress        Maximum address to stop the merge.
>>> +
>>> +  @return VOID
>>> +
>>> +**/
>>> +VOID
>>> +MergeGuardPages (
>>> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
>>> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
>>> +  )
>>> +{
>>> +  EFI_PHYSICAL_ADDRESS        EndAddress;
>>> +  UINT64                      Bitmap;
>>> +  INTN                        Pages;
>>> +
>>> +  if (!IsUafEnabled () ||
>>> +      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
>>> +    return;
>>> +  }
>>> +
>>> +  Bitmap = 0;
>>> +  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
>>> +  Pages -= MemoryMapEntry->NumberOfPages;
>>> +  while (Pages > 0) {
>>> +    if (Bitmap == 0) {
>>> +      EndAddress = MemoryMapEntry->PhysicalStart +
>>> +                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
>>> +      Bitmap = GetGuardedMemoryBits(EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
>>> +    }
>>> +
>>> +    if ((Bitmap & 1) == 0) {
>>> +      break;
>>> +    }
>>> +
>>> +    Pages--;
>>> +    MemoryMapEntry->NumberOfPages++;
>>> +    Bitmap = RShiftU64 (Bitmap, 1);
>>> +  }
>>> +}
>>> +
>>> +/**
>>> +  Put part (at most 64 pages a time) guarded free pages back to free page
>> pool.
>>> +
>>> +  Use-After-Free detection makes use of 'Used then throw away' way to
>> detect
>>> +  any illegal access to freed memory. The thrown-away memory will be
>> marked as
>>> +  not-present so that any access to those memory (after free) will trigger
>>> +  page-fault.
>>> +
>>> +  The problem is that this will consume lots of memory space. Once no
>> memory
>>> +  left in pool to allocate, we have to restore part of the freed pages to their
>>> +  normal function. Otherwise the whole system will stop functioning.
>>> +
>>> +  @param  StartAddress    Start address of promoted memory.
>>> +  @param  EndAddress      End address of promoted memory.
>>> +
>>> +  @return TRUE    Succeeded to promote memory.
>>> +  @return FALSE   No free memory found.
>>> +
>>> +**/
>>> +BOOLEAN
>>> +PromoteGuardedFreePages (
>>> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
>>> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
>>> +  )
>>> +{
>>> +  EFI_STATUS              Status;
>>> +  UINTN                   AvailablePages;
>>> +  UINT64                  Bitmap;
>>> +  EFI_PHYSICAL_ADDRESS    Start;
>>> +
>>> +  if (!IsUafEnabled ()) {
>>> +    return FALSE;
>>> +  }
>>> +
>>> +  //
>>> +  // Similar to memory allocation service, always search the freed pages in
>>> +  // descending direction.
>>> +  //
>>> +  Start           = mLastPromotedPage;
>>> +  AvailablePages  = 0;
>>> +  while (AvailablePages == 0) {
>>> +    Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
>>> +    //
>>> +    // If the address wraps around, try the really freed pages at top.
>>> +    //
>>> +    if (Start > mLastPromotedPage) {
>>> +      GetLastGuardedFreePageAddress (&Start);
>>> +      ASSERT (Start != 0);
>>> +      Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
>>> +    }
>>> +
>>> +    Bitmap = GetGuardedMemoryBits(Start, GUARDED_HEAP_MAP_ENTRY_BITS);
>>> +    while (Bitmap > 0) {
>>> +      if ((Bitmap & 1) != 0) {
>>> +        ++AvailablePages;
>>> +      } else if (AvailablePages == 0) {
>>> +        Start += EFI_PAGES_TO_SIZE (1);
>>> +      } else {
>>> +        break;
>>> +      }
>>> +
>>> +      Bitmap = RShiftU64(Bitmap, 1);
>>> +    }
>>> +  }
>>> +
>>> +  if (AvailablePages) {
>>> +    DEBUG((DEBUG_INFO, "Promoted pages: %lX (%X)\r\n", Start, AvailablePages));
>>> +    if (gCpu != NULL) {
>>> +      //
>>> +      // Set flag to make sure allocating memory without GUARD for page table
>>> +      // operation; otherwise infinite loops could be caused.
>>> +      //
>>> +      mOnGuarding = TRUE;
>>> +      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
>>> +      ASSERT_EFI_ERROR (Status);
>>> +      mOnGuarding = FALSE;
>>> +    }
>>> +
>>> +    mLastPromotedPage = Start;
>>> +    *StartAddress     = Start;
>>> +    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
>>> +    return TRUE;
>>> +  }
>>> +
>>> +  return FALSE;
>>> +}
>>> +
>>>    /**
>>>      Notify function used to set all Guard pages before CPU Arch Protocol
>> installed.
>>>    **/
>>> @@ -1212,7 +1584,14 @@ HeapGuardCpuArchProtocolNotify (
>>>      )
>>>    {
>>>      ASSERT (gCpu != NULL);
>>> -  SetAllGuardPages ();
>>> +
>>> +  if ((PcdGet8 (PcdHeapGuardPropertyMask) & 0x0F) != 0) {
>>> +    SetAllGuardPages ();
>>> +  }
>>> +
>>> +  if (IsUafEnabled ()) {
>>> +    GuardAllFreedPages ();
>>> +  }
>>>    }
>>>
>>>    /**
>>> @@ -1264,6 +1643,14 @@ DumpGuardedMemoryBitmap (
>>>      CHAR8     *Ruler1;
>>>      CHAR8     *Ruler2;
>>>
>>> +  if (!IsHeapGuardEnabled () && !IsUafEnabled ()) {
>>> +    return;
>>> +  }
>>> +
>>> +  if ((GetDebugPrintErrorLevel () & HEAP_GUARD_DEBUG_LEVEL) == 0) {
>>> +    return;
>>> +  }
>>> +
>>>      if (mGuardedMemoryMap == 0 ||
>>>          mMapLevel == 0 ||
>>>          mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
>>> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
>> b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
>>> index 8c34692439..3f2a5283c2 100644
>>> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
>>> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
>>> @@ -407,6 +407,72 @@ HeapGuardCpuArchProtocolNotify (
>>>      VOID
>>>      );
>>>
>>> +/**
>>> +  This function checks to see if the given memory map descriptor in a memory
>> map
>>> +  can be merged with any guarded free pages.
>>> +
>>> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
>>> +  @param  MaxAddress        Maximum address to stop the merge.
>>> +
>>> +  @return VOID
>>> +
>>> +**/
>>> +VOID
>>> +MergeGuardPages (
>>> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
>>> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
>>> +  );
>>> +
>>> +/**
>>> +  Record freed pages as well as mark them as not-present, if possible.
>>> +
>>> +  @param[in]  BaseAddress   Base address of just freed pages.
>>> +  @param[in]  Pages         Number of freed pages.
>>> +
>>> +  @return VOID.
>>> +**/
>>> +VOID
>>> +EFIAPI
>>> +GuardFreedPages (
>>> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
>>> +  IN  UINTN                   Pages
>>> +  );
>>> +
>>> +/**
>>> +  Check to see if the use-after-free (UAF) feature is enabled or not.
>>> +
>>> +  @return TRUE/FALSE.
>>> +**/
>>> +BOOLEAN
>>> +IsUafEnabled (
>>> +  VOID
>>> +  );
>>> +
>>> +/**
>>> +  Put part (at most 64 pages a time) guarded free pages back to free page
>> pool.
>>> +
>>> +  Use-After-Free detection makes use of 'Used then throw away' way to
>> detect
>>> +  any illegal access to freed memory. The thrown-away memory will be
>> marked as
>>> +  not-present so that any access to those memory (after free) will trigger
>>> +  page-fault.
>>> +
>>> +  The problem is that this will consume lots of memory space. Once no
>> memory
>>> +  left in pool to allocate, we have to restore part of the freed pages to their
>>> +  normal function. Otherwise the whole system will stop functioning.
>>> +
>>> +  @param  StartAddress    Start address of promoted memory.
>>> +  @param  EndAddress      End address of promoted memory.
>>> +
>>> +  @return TRUE    Succeeded to promote memory.
>>> +  @return FALSE   No free memory found.
>>> +
>>> +**/
>>> +BOOLEAN
>>> +PromoteGuardedFreePages (
>>> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
>>> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
>>> +  );
>>> +
>>>    extern BOOLEAN mOnGuarding;
>>>
>>>    #endif
>>> diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c
>> b/MdeModulePkg/Core/Dxe/Mem/Page.c
>>> index 3b4cc08e7c..9f4419fa5b 100644
>>> --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
>>> +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
>>> @@ -401,9 +401,11 @@ PromoteMemoryResource (
>>>      VOID
>>>      )
>>>    {
>>> -  LIST_ENTRY         *Link;
>>> -  EFI_GCD_MAP_ENTRY  *Entry;
>>> -  BOOLEAN            Promoted;
>>> +  LIST_ENTRY            *Link;
>>> +  EFI_GCD_MAP_ENTRY     *Entry;
>>> +  BOOLEAN               Promoted;
>>> +  EFI_PHYSICAL_ADDRESS  StartAddress;
>>> +  EFI_PHYSICAL_ADDRESS  EndAddress;
>>>
>>>      DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
>>>
>>> @@ -451,6 +453,24 @@ PromoteMemoryResource (
>>>
>>>      CoreReleaseGcdMemoryLock ();
>>>
>>> +  if (!Promoted) {
>>> +    //
>>> +    // If Use-After-Free detection is enabled, we might promote pages from
>>> +    // guarded free pages.
>>> +    //
>>> +    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
>>> +    if (Promoted) {
>>> +      CoreAcquireGcdMemoryLock ();
>>> +      CoreAddRange (
>>> +        EfiConventionalMemory,
>>> +        StartAddress,
>>> +        EndAddress,
>>> +        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
>>> +        );
>>> +      CoreReleaseGcdMemoryLock ();
>>> +    }
>>> +  }
>>> +
>>>      return Promoted;
>>>    }
>>>    /**
>>> @@ -896,9 +916,13 @@ CoreConvertPagesEx (
>>>        }
>>>
>>>        //
>>> -    // Add our new range in
>>> +    // Add our new range in. Don't do this for freed pages if Use-After-Free
>>> +    // detection is enabled.
>>>        //
>>> -    CoreAddRange (MemType, Start, RangeEnd, Attribute);
>>> +    if (!IsUafEnabled () || !ChangingType || MemType !=
>> EfiConventionalMemory) {
>>> +      CoreAddRange (MemType, Start, RangeEnd, Attribute);
>>> +    }
>>> +
>>>        if (ChangingType && (MemType == EfiConventionalMemory)) {
>>>          //
>>>          // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because
>> this
>>> @@ -1514,6 +1538,7 @@ CoreFreePages (
>>>
>>>      Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
>>>      if (!EFI_ERROR (Status)) {
>>> +    GuardFreedPages (Memory, NumberOfPages);
>>>        CoreUpdateProfile (
>>>          (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
>>>          MemoryProfileActionFreePages,
>>> @@ -1908,9 +1933,7 @@ Done:
>>>      *MemoryMapSize = BufferSize;
>>>
>>>      DEBUG_CODE (
>>> -    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
>>> -      DumpGuardedMemoryBitmap ();
>>> -    }
>>> +    DumpGuardedMemoryBitmap ();
>>>      );
>>>
>>>      return Status;
>>> diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c
>> b/MdeModulePkg/Core/Dxe/Mem/Pool.c
>>> index 1ff2061f7f..0cd74267b3 100644
>>> --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
>>> +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
>>> @@ -26,7 +26,8 @@ typedef struct {
>>>    } POOL_FREE;
>>>
>>>
>>> -#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
>>> +#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
>>> +#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
>>>    typedef struct {
>>>      UINT32          Signature;
>>>      UINT32          Reserved;
>>> @@ -367,6 +368,7 @@ CoreAllocatePoolI (
>>>      UINTN       NoPages;
>>>      UINTN       Granularity;
>>>      BOOLEAN     HasPoolTail;
>>> +  BOOLEAN     PageAsPool;
>>>
>>>      ASSERT_LOCKED (&mPoolMemoryLock);
>>>
>>> @@ -386,6 +388,7 @@ CoreAllocatePoolI (
>>>
>>>      HasPoolTail  = !(NeedGuard &&
>>>                       ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
>>> +  PageAsPool = (IsUafEnabled () && !mOnGuarding);
>>>
>>>      //
>>>      // Adjusting the Size to be of proper alignment so that
>>> @@ -406,7 +409,7 @@ CoreAllocatePoolI (
>>>      // If allocation is over max size, just allocate pages for the request
>>>      // (slow)
>>>      //
>>> -  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
>>> +  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
>>>        if (!HasPoolTail) {
>>>          Size -= sizeof (POOL_TAIL);
>>>        }
>>> @@ -498,7 +501,7 @@ Done:
>>>        //
>>>        // If we have a pool buffer, fill in the header & tail info
>>>        //
>>> -    Head->Signature = POOL_HEAD_SIGNATURE;
>>> +    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
>>>        Head->Size      = Size;
>>>        Head->Type      = (EFI_MEMORY_TYPE) PoolType;
>>>        Buffer          = Head->Data;
>>> @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
>>>      CoreFreePoolPages (Memory, NoPages);
>>>      CoreReleaseMemoryLock ();
>>>
>>> +  GuardFreedPages (Memory, NoPages);
>>>      ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
>>>        (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
>>>    }
>>> @@ -685,15 +689,19 @@ CoreFreePoolI (
>>>      UINTN       Granularity;
>>>      BOOLEAN     IsGuarded;
>>>      BOOLEAN     HasPoolTail;
>>> +  BOOLEAN     PageAsPool;
>>>
>>>      ASSERT(Buffer != NULL);
>>>      //
>>>      // Get the head & tail of the pool entry
>>>      //
>>> -  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
>>> +  Head = BASE_CR (Buffer, POOL_HEAD, Data);
>>>      ASSERT(Head != NULL);
>>>
>>> -  if (Head->Signature != POOL_HEAD_SIGNATURE) {
>>> +  if (Head->Signature != POOL_HEAD_SIGNATURE &&
>>> +      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
>>> +    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
>>> +            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
>>>        return EFI_INVALID_PARAMETER;
>>>      }
>>>
>>> @@ -701,6 +709,7 @@ CoreFreePoolI (
>>>                    IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
>>>      HasPoolTail = !(IsGuarded &&
>>>                      ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
>>> +  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
>>>
>>>      if (HasPoolTail) {
>>>        Tail = HEAD_TO_TAIL (Head);
>>> @@ -757,7 +766,7 @@ CoreFreePoolI (
>>>      //
>>>      // If it's not on the list, it must be pool pages
>>>      //
>>> -  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
>>> +  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
>>>
>>>        //
>>>        // Return the memory pages back to free memory
>>> diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
>> b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
>>> index 05eb4f422b..f6595c90ed 100644
>>> --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
>>> +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
>>> @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY
>> KIND, EITHER EXPRESS OR IMPLIED.
>>>    #include <Guid/PropertiesTable.h>
>>>
>>>    #include "DxeMain.h"
>>> +#include "HeapGuard.h"
>>>
>>>    #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
>>>      ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
>>> @@ -205,16 +206,13 @@ MergeMemoryMap (
>>>        NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
>>>
>>>        do {
>>> -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
>>> +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
>>> +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
>>>          if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
>>> -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
>>> -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
>>> -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
>>> -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
>>> -        if (NewMemoryMapEntry != MemoryMapEntry) {
>>> -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
>>> -        }
>>> -
>>> +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
>>> +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
>>> +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
>>> +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
>>>            NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
>>>            continue;
>>>          } else {
>>>



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection
  2018-10-22  8:23       ` Zeng, Star
@ 2018-10-23  1:24         ` Wang, Jian J
  2018-10-23  3:14           ` Zeng, Star
  0 siblings, 1 reply; 16+ messages in thread
From: Wang, Jian J @ 2018-10-23  1:24 UTC (permalink / raw)
  To: Zeng, Star, edk2-devel@lists.01.org
  Cc: Kinney, Michael D, Ni, Ruiyu, Yao, Jiewen, Laszlo Ersek

Star,

Regards,
Jian


> -----Original Message-----
> From: Zeng, Star
> Sent: Monday, October 22, 2018 4:24 PM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
> Subject: Re: [edk2] [PATCH 3/3] MdeModulePkg/Core: add use-after-free
> memory detection
> 
> On 2018/10/22 15:12, Wang, Jian J wrote:
> > Star,
> >
> > Thanks for the comment. Please see my opinions below.
> >
> > Regards,
> > Jian
> >
> >
> >> -----Original Message-----
> >> From: Zeng, Star
> >> Sent: Monday, October 22, 2018 10:53 AM
> >> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> >> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
> >> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
> >> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
> >> Subject: Re: [edk2] [PATCH 3/3] MdeModulePkg/Core: add use-after-free
> >> memory detection
> >>
> >> On 2018/10/19 9:50, Jian J Wang wrote:
> >>> UAF (Use-After-Free) memory detection is a new feature introduced to
> >>> detect illegal access to memory which has been freed. The principle
> >>> behind it is similar to the heap guard feature: the core turns all pool
> >>> memory allocations into page allocations and marks them as not-present
> >>> once they are freed.
> >>>
> >>> This also implies that, once a page is allocated and freed, it cannot
> >>> be re-allocated. This brings another issue: there's a risk that memory
> >>> space will be exhausted. To address it, the memory service adds logic
> >>> to put part (at most 64 pages at a time) of the freed pages back into
> >>> the page pool, so that the memory service still has memory to allocate
> >>> once all memory space has been allocated. This is called memory
> >>> promotion. The promoted pages are always the oldest pages that have
> >>> been freed.
> >>>
> >>> To use this feature, one can simply set following PCD to 1
> >>>
> >> gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
> >>>
> >>> Please note this feature cannot be used with heap guard feature controlled
> >>> by PCD gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask.
> >>>
> >>> Cc: Star Zeng <star.zeng@intel.com>
> >>> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> >>> Cc: Jiewen Yao <jiewen.yao@intel.com>
> >>> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> >>> Cc: Laszlo Ersek <lersek@redhat.com>
> >>> Contributed-under: TianoCore Contribution Agreement 1.1
> >>> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> >>
> >> I did not review the whole patch thoroughly. But I suggest putting
> >> the code for UAF in a separate file if it is feasible. I am aware that
> >> UAF will be sharing much code with HeapGuard, so could we have Uaf.c,
> >> HeapGuard.c and GuardPage.c (these file names are just examples)?
> >>
> >
> > I think we can take UAF as a special kind of heap guard: guarding the
> > freed heap memory instead of guarding the allocated heap memory. So
> > UAF is a complement of Heap Guard, or part of Heap Guard. From this
> > perspective, it's not necessary to put it in a separate file.
> 
> Very good point. From this perspective, I agree.
> Then I would like to suggest not introducing a new PCD
> PcdUseAfterFreeDetectionPropertyMask, but using one BIT of
> PcdHeapGuardPropertyMask for it, since we think it is part of Heap Guard.
> The benefits are:
> 1. No confusion between Heap Guard and Use After Free.
> 2. No need to introduce a new PCD.
> 3. Can reuse BIT6 - Enable non-stop mode.
> 4. No need to update DxeIplPeim to enable IA32 PAE paging.
> 

Good idea. I'll try it. Thanks.

> >
> >>> ---
> >>>    MdeModulePkg/Core/Dxe/DxeMain.h              |   1 +
> >>>    MdeModulePkg/Core/Dxe/DxeMain.inf            |   1 +
> >>>    MdeModulePkg/Core/Dxe/Gcd/Gcd.c              |   8 +
> >>>    MdeModulePkg/Core/Dxe/Mem/HeapGuard.c        | 393
> >> ++++++++++++++++++++++++++-
> >>>    MdeModulePkg/Core/Dxe/Mem/HeapGuard.h        |  66 +++++
> >>>    MdeModulePkg/Core/Dxe/Mem/Page.c             |  39 ++-
> >>>    MdeModulePkg/Core/Dxe/Mem/Pool.c             |  21 +-
> >>>    MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c |  16 +-
> >>>    8 files changed, 519 insertions(+), 26 deletions(-)
> >>>
> >>> diff --git a/MdeModulePkg/Core/Dxe/DxeMain.h
> >> b/MdeModulePkg/Core/Dxe/DxeMain.h
> >>> index 2dec9da5e3..ae75cc5b25 100644
> >>> --- a/MdeModulePkg/Core/Dxe/DxeMain.h
> >>> +++ b/MdeModulePkg/Core/Dxe/DxeMain.h
> >>> @@ -92,6 +92,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF
> ANY
> >> KIND, EITHER EXPRESS OR IMPLIED.
> >>>    #include <Library/DxeServicesLib.h>
> >>>    #include <Library/DebugAgentLib.h>
> >>>    #include <Library/CpuExceptionHandlerLib.h>
> >>> +#include <Library/DebugPrintErrorLevelLib.h>
> >>>
> >>>
> >>>    //
> >>> diff --git a/MdeModulePkg/Core/Dxe/DxeMain.inf
> >> b/MdeModulePkg/Core/Dxe/DxeMain.inf
> >>> index 10375443c0..d91258c087 100644
> >>> --- a/MdeModulePkg/Core/Dxe/DxeMain.inf
> >>> +++ b/MdeModulePkg/Core/Dxe/DxeMain.inf
> >>> @@ -198,6 +198,7 @@
> >>>      gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType
> ##
> >> CONSUMES
> >>>      gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
> >> ## CONSUMES
> >>>      gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard                           ##
> >> CONSUMES
> >>> +
> >> gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
> >> ## CONSUMES
> >>>
> >>>    # [Hob]
> >>>    # RESOURCE_DESCRIPTOR   ## CONSUMES
> >>> diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> >> b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> >>> index d9c65a8045..e43ec74010 100644
> >>> --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> >>> +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> >>> @@ -160,6 +160,10 @@ CoreDumpGcdMemorySpaceMap (
> >>>        EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *MemorySpaceMap;
> >>>        UINTN                            Index;
> >>>
> >>> +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
> >>> +      return;
> >>> +    }
> >>
> >> I am aware the purpose of this change is to optimize some code when
> >> DEBUG_GCD is not set.
> >> But I do not suggest we newly introduce a dependency on
> >> DebugPrintErrorLevelLib; I think this check should be hidden in the
> >> instance of DebugLib. Maybe a new macro DEBUG_CODE_ERROR_LEVEL (this
> >> macro name is just an example) can be introduced if we really want to
> >> do that.
> >>
> >
> > There're two purposes here: one is optimization, but the other is to fix a real
> > issue: GCD memory lock reentrance (see below for details)
> >
> >     CoreDumpGcdMemorySpaceMap()
> > => CoreGetMemorySpaceMap()
> > => CoreAcquireGcdMemoryLock () *
> >     AllocatePool()
> > => InternalAllocatePool()
> > => CoreAllocatePool()
> > => CoreAllocatePoolI()
> > => CoreAllocatePoolPagesI()
> > => CoreAllocatePoolPages()
> > => FindFreePages()
> > => PromoteMemoryResource()
> > => CoreAcquireGcdMemoryLock() **
> >
> > If GetDebugPrintErrorLevel() is not recommended, maybe we need to fix the
> > above issue in a different way. For example, change CoreGetMemorySpaceMap()
> > to avoid the lock conflict, if possible. In addition, we could set
> > PcdFixedDebugPrintErrorLevel to control the DEBUG_GCD print.
> 
> Got it, thanks.
> I still do not suggest using GetDebugPrintErrorLevel with a new
> DebugPrintErrorLevelLib dependency.
> Seemingly, the problem could not even be resolved thoroughly, because it
> may still happen when DEBUG_GCD is set, right?
> 
> A possible solution is adding CoreAcquireGcdMemoryLockOrFail, updating
> PromoteMemoryResource to use CoreAcquireGcdMemoryLockOrFail, and
> replacing the ASSERT in CoreDumpGcdMemorySpaceMap with an error check.
> Maybe there is another good idea.
> 

The actual issue is that, in the code of CoreGetMemorySpaceMap(), AllocatePool()
is inside the GCD lock scope. Maybe we could move it out of the lock. Do you
know if AllocatePool() will change the GCD memory map? I can't see it from the
code, but maybe I missed something.

  CoreAcquireGcdMemoryLock ();
  ...
  ...
  *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
  ...
  ...
Done:
  CoreReleaseGcdMemoryLock ();

=======>

  CoreAcquireGcdMemoryLock ();
  ...
  CoreReleaseGcdMemoryLock ();
  ...
  *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
  ...
  CoreAcquireGcdMemoryLock ();
  ...
Done:
  CoreReleaseGcdMemoryLock ();
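
One caveat with this restructuring, sketched below just to illustrate the
idea: the map may grow while the lock is released, so the descriptor count
has to be re-checked before filling the buffer. CoreCountGcdMapEntries is a
hypothetical helper standing in for the counting loop already present in
CoreGetMemorySpaceMap; this is a fragment, not final code:

  EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *MemorySpaceMap;
  UINTN                            DescriptorCount;

  MemorySpaceMap = NULL;
  for (;;) {
    //
    // Count the entries under the lock, but allocate outside of it.
    //
    CoreAcquireGcdMemoryLock ();
    DescriptorCount = CoreCountGcdMapEntries ();  // hypothetical helper
    CoreReleaseGcdMemoryLock ();

    if (MemorySpaceMap != NULL) {
      FreePool (MemorySpaceMap);
    }
    MemorySpaceMap = AllocatePool (DescriptorCount * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
    if (MemorySpaceMap == NULL) {
      return EFI_OUT_OF_RESOURCES;
    }

    CoreAcquireGcdMemoryLock ();
    if (CoreCountGcdMapEntries () == DescriptorCount) {
      //
      // Buffer still fits; fill it while holding the lock, then release.
      //
      break;
    }
    //
    // The map changed while the lock was dropped; retry with the new count.
    //
    CoreReleaseGcdMemoryLock ();
  }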

> 
> Thanks,
> Star
> 
> >
> >>
> >> Thanks,
> >> Star
> >>
> >>> +
> >>>        Status = CoreGetMemorySpaceMap (&NumberOfDescriptors,
> >> &MemorySpaceMap);
> >>>        ASSERT (Status == EFI_SUCCESS && MemorySpaceMap != NULL);
> >>>
> >>> @@ -202,6 +206,10 @@ CoreDumpGcdIoSpaceMap (
> >>>        EFI_GCD_IO_SPACE_DESCRIPTOR  *IoSpaceMap;
> >>>        UINTN                        Index;
> >>>
> >>> +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
> >>> +      return;
> >>> +    }
> >>> +
> >>>        Status = CoreGetIoSpaceMap (&NumberOfDescriptors, &IoSpaceMap);
> >>>        ASSERT (Status == EFI_SUCCESS && IoSpaceMap != NULL);
> >>>
> >>> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> >> b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> >>> index 663f969c0d..b120c04f8f 100644
> >>> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> >>> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> >>> @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN
> >> mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >>>    GLOBAL_REMOVE_IF_UNREFERENCED UINTN
> >> mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >>>                                        = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
> >>>
> >>> +//
> >>> +// Used for Use-After-Free memory detection to promote freed but not
> used
> >> pages.
> >>> +//
> >>> +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS
> >> mLastPromotedPage = BASE_4GB;
> >>> +
> >>>    /**
> >>>      Set corresponding bits in bitmap table to 1 according to the address.
> >>>
> >>> @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
> >>>
> >>>      @return An integer containing the guarded memory bitmap.
> >>>    **/
> >>> -UINTN
> >>> +UINT64
> >>>    GetGuardedMemoryBits (
> >>>      IN EFI_PHYSICAL_ADDRESS    Address,
> >>>      IN UINTN                   NumberOfPages
> >>> @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
> >>>    {
> >>>      UINT64            *BitMap;
> >>>      UINTN             Bits;
> >>> -  UINTN             Result;
> >>> +  UINT64            Result;
> >>>      UINTN             Shift;
> >>>      UINTN             BitsToUnitEnd;
> >>>
> >>> @@ -1203,6 +1208,373 @@ SetAllGuardPages (
> >>>      }
> >>>    }
> >>>
> >>> +/**
> >>> +  Check to see if the Use-After-Free (UAF) feature is enabled or not.
> >>> +
> >>> +  @return TRUE/FALSE.
> >>> +**/
> >>> +BOOLEAN
> >>> +IsUafEnabled (
> >>> +  VOID
> >>> +  )
> >>> +{
> >>> +  ASSERT (!IsHeapGuardEnabled());
> >>> +  return ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) != 0);
> >>> +}
> >>> +
> >>> +/**
> >>> +  Find the address of top-most guarded free page.
> >>> +
> >>> +  @param[out]  Address    Start address of top-most guarded free page.
> >>> +
> >>> +  @return VOID.
> >>> +**/
> >>> +VOID
> >>> +GetLastGuardedFreePageAddress (
> >>> +  OUT EFI_PHYSICAL_ADDRESS      *Address
> >>> +  )
> >>> +{
> >>> +  EFI_PHYSICAL_ADDRESS    AddressGranularity;
> >>> +  EFI_PHYSICAL_ADDRESS    BaseAddress;
> >>> +  UINTN                   Level;
> >>> +  UINT64                  Map;
> >>> +  INTN                    Index;
> >>> +
> >>> +  ASSERT (mMapLevel >= 1);
> >>> +
> >>> +  BaseAddress = 0;
> >>> +  Map = mGuardedMemoryMap;
> >>> +  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> >>> +       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
> >>> +       ++Level) {
> >>> +    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
> >>> +
> >>> +    //
> >>> +    // Find the non-NULL entry at largest index.
> >>> +    //
> >>> +    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
> >>> +      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
> >>> +        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
> >>> +        Map = ((UINT64 *)(UINTN)Map)[Index];
> >>> +        break;
> >>> +      }
> >>> +    }
> >>> +  }
> >>> +
> >>> +  //
> >>> +  // Find the non-zero MSB then get the page address.
> >>> +  //
> >>> +  while (Map != 0) {
> >>> +    Map = RShiftU64 (Map, 1);
> >>> +    BaseAddress += EFI_PAGES_TO_SIZE (1);
> >>> +  }
> >>> +
> >>> +  *Address = BaseAddress;
> >>> +}
> >>> +
> >>> +/**
> >>> +  Record freed pages.
> >>> +
> >>> +  @param[in]  BaseAddress   Base address of just freed pages.
> >>> +  @param[in]  Pages         Number of freed pages.
> >>> +
> >>> +  @return VOID.
> >>> +**/
> >>> +VOID
> >>> +MarkFreedPages (
> >>> +  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
> >>> +  IN UINTN                    Pages
> >>> +  )
> >>> +{
> >>> +  SetGuardedMemoryBits (BaseAddress, Pages);
> >>> +}
> >>> +
> >>> +/**
> >>> +  Record freed pages as well as mark them as not-present, if possible.
> >>> +
> >>> +  @param[in]  BaseAddress   Base address of just freed pages.
> >>> +  @param[in]  Pages         Number of freed pages.
> >>> +
> >>> +  @return VOID.
> >>> +**/
> >>> +VOID
> >>> +EFIAPI
> >>> +GuardFreedPages (
> >>> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> >>> +  IN  UINTN                   Pages
> >>> +  )
> >>> +{
> >>> +  EFI_STATUS      Status;
> >>> +
> >>> +  //
> >>> +  // Do nothing if
> >>> +  // - Use-After-Free detection is not enabled
> >>> +  // - Freed memory is legacy memory lower than 1MB
> >>> +  //
> >>> +  if (!IsUafEnabled () || BaseAddress < BASE_1MB) {
> >>> +    return;
> >>> +  }
> >>> +
> >>> +  MarkFreedPages (BaseAddress, Pages);
> >>> +  if (gCpu != NULL) {
> >>> +    //
> >>> +    // Set flag to make sure allocating memory without GUARD for page
> table
> >>> +    // operation; otherwise infinite loops could be caused.
> >>> +    //
> >>> +    mOnGuarding = TRUE;
> >>> +    //
> >>> +    // Note: This might overwrite other attributes needed by other features,
> >>> +    // such as NX memory protection.
> >>> +    //
> >>> +    Status = gCpu->SetMemoryAttributes (
> >>> +                     gCpu,
> >>> +                     BaseAddress,
> >>> +                     EFI_PAGES_TO_SIZE(Pages),
> >>> +                     EFI_MEMORY_RP
> >>> +                     );
> >>> +    //
> >>> +    // Normally we should check the returned Status. But there might be
> >> memory
> >>> +    // alloc/free involved in SetMemoryAttributes(), which would fail this
> >>> +    // calling. It's rare case so it's OK to let a few tiny holes be not-guarded.
> >>> +    //
> >>> +    if (EFI_ERROR (Status)) {
> >>> +      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%d)\n",
> >> BaseAddress, Pages));
> >>> +    }
> >>> +    mOnGuarding = FALSE;
> >>> +  }
> >>> +}
> >>> +
> >>> +/**
> >>> +  Mark all pages freed before CPU Arch Protocol as not-present.
> >>> +
> >>> +**/
> >>> +VOID
> >>> +GuardAllFreedPages (
> >>> +  VOID
> >>> +  )
> >>> +{
> >>> +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> >>> +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> >>> +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> >>> +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> >>> +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> >>> +  UINT64    TableEntry;
> >>> +  UINT64    Address;
> >>> +  UINT64    GuardPage;
> >>> +  INTN      Level;
> >>> +  UINTN     BitIndex;
> >>> +  UINTN     GuardPageNumber;
> >>> +
> >>> +  if (mGuardedMemoryMap == 0 ||
> >>> +      mMapLevel == 0 ||
> >>> +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> >>> +    return;
> >>> +  }
> >>> +
> >>> +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> >>> +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> >>> +
> >>> +  SetMem (Tables, sizeof(Tables), 0);
> >>> +  SetMem (Addresses, sizeof(Addresses), 0);
> >>> +  SetMem (Indices, sizeof(Indices), 0);
> >>> +
> >>> +  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> >>> +  Tables[Level]   = mGuardedMemoryMap;
> >>> +  Address         = 0;
> >>> +  GuardPage       = (UINT64)-1;
> >>> +  GuardPageNumber = 0;
> >>> +
> >>> +  while (TRUE) {
> >>> +    if (Indices[Level] > Entries[Level]) {
> >>> +      Tables[Level] = 0;
> >>> +      Level        -= 1;
> >>> +    } else {
> >>> +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> >>> +      Address     = Addresses[Level];
> >>> +
> >>> +      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> >>> +        Level            += 1;
> >>> +        Tables[Level]     = TableEntry;
> >>> +        Addresses[Level]  = Address;
> >>> +        Indices[Level]    = 0;
> >>> +
> >>> +        continue;
> >>> +      } else {
> >>> +        BitIndex = 1;
> >>> +        while (BitIndex != 0) {
> >>> +          if ((TableEntry & BitIndex) != 0) {
> >>> +            if (GuardPage == (UINT64)-1) {
> >>> +              GuardPage = Address;
> >>> +            }
> >>> +            ++GuardPageNumber;
> >>> +          } else if (GuardPageNumber > 0) {
> >>> +            GuardFreedPages (GuardPage, GuardPageNumber);
> >>> +            GuardPageNumber = 0;
> >>> +            GuardPage       = (UINT64)-1;
> >>> +          }
> >>> +
> >>> +          if (TableEntry == 0) {
> >>> +            break;
> >>> +          }
> >>> +
> >>> +          Address += EFI_PAGES_TO_SIZE (1);
> >>> +          BitIndex = LShiftU64(BitIndex, 1);
> >>> +        }
> >>> +      }
> >>> +    }
> >>> +
> >>> +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> >>> +      break;
> >>> +    }
> >>> +
> >>> +    Indices[Level] += 1;
> >>> +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> >>> +    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
> >>> +
> >>> +  }
> >>> +
> >>> +  //
> >>> +  // Update the maximum address of freed page which can be used for
> >> memory
> >>> +  // promotion upon out-of-memory-space.
> >>> +  //
> >>> +  GetLastGuardedFreePageAddress (&Address);
> >>> +  if (Address != 0) {
> >>> +    mLastPromotedPage = Address;
> >>> +  }
> >>> +}
> >>> +
> >>> +/**
> >>> +  This function checks to see if the given memory map descriptor in a
> memory
> >> map
> >>> +  can be merged with any guarded free pages.
> >>> +
> >>> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> >>> +  @param  MaxAddress        Maximum address to stop the merge.
> >>> +
> >>> +  @return VOID
> >>> +
> >>> +**/
> >>> +VOID
> >>> +MergeGuardPages (
> >>> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> >>> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> >>> +  )
> >>> +{
> >>> +  EFI_PHYSICAL_ADDRESS        EndAddress;
> >>> +  UINT64                      Bitmap;
> >>> +  INTN                        Pages;
> >>> +
> >>> +  if (!IsUafEnabled () ||
> >>> +      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
> >>> +    return;
> >>> +  }
> >>> +
> >>> +  Bitmap = 0;
> >>> +  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
> >>> +  Pages -= MemoryMapEntry->NumberOfPages;
> >>> +  while (Pages > 0) {
> >>> +    if (Bitmap == 0) {
> >>> +      EndAddress = MemoryMapEntry->PhysicalStart +
> >>> +                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
> >>> +      Bitmap = GetGuardedMemoryBits (EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
> >>> +    }
> >>> +
> >>> +    if ((Bitmap & 1) == 0) {
> >>> +      break;
> >>> +    }
> >>> +
> >>> +    Pages--;
> >>> +    MemoryMapEntry->NumberOfPages++;
> >>> +    Bitmap = RShiftU64 (Bitmap, 1);
> >>> +  }
> >>> +}
> >>> +
> >>> +/**
> >>> +  Put part (at most 64 pages a time) of guarded free pages back to the
> >>> +  free page pool.
> >>> +
> >>> +  Use-After-Free detection takes a 'use then throw away' approach to
> >>> +  detect any illegal access to freed memory. The thrown-away memory will
> >>> +  be marked as not-present so that any access to it (after free) will
> >>> +  trigger a page fault.
> >>> +
> >>> +  The problem is that this consumes a lot of memory space. Once no memory
> >>> +  is left in the pool to allocate, we have to restore part of the freed
> >>> +  pages to their normal function. Otherwise the whole system will stop
> >>> +  functioning.
> >>> +
> >>> +  @param  StartAddress    Start address of promoted memory.
> >>> +  @param  EndAddress      End address of promoted memory.
> >>> +
> >>> +  @return TRUE    Succeeded to promote memory.
> >>> +  @return FALSE   No free memory found.
> >>> +
> >>> +**/
> >>> +BOOLEAN
> >>> +PromoteGuardedFreePages (
> >>> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> >>> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> >>> +  )
> >>> +{
> >>> +  EFI_STATUS              Status;
> >>> +  UINTN                   AvailablePages;
> >>> +  UINT64                  Bitmap;
> >>> +  EFI_PHYSICAL_ADDRESS    Start;
> >>> +
> >>> +  if (!IsUafEnabled ()) {
> >>> +    return FALSE;
> >>> +  }
> >>> +
> >>> +  //
> >>> +  // Similar to memory allocation service, always search the freed pages in
> >>> +  // descending direction.
> >>> +  //
> >>> +  Start           = mLastPromotedPage;
> >>> +  AvailablePages  = 0;
> >>> +  while (AvailablePages == 0) {
> >>> +    Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
> >>> +    //
> >>> +    // If the address wraps around, try the really freed pages at top.
> >>> +    //
> >>> +    if (Start > mLastPromotedPage) {
> >>> +      GetLastGuardedFreePageAddress (&Start);
> >>> +      ASSERT (Start != 0);
> >>> +      Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
> >>> +    }
> >>> +
> >>> +    Bitmap = GetGuardedMemoryBits (Start, GUARDED_HEAP_MAP_ENTRY_BITS);
> >>> +    while (Bitmap > 0) {
> >>> +      if ((Bitmap & 1) != 0) {
> >>> +        ++AvailablePages;
> >>> +      } else if (AvailablePages == 0) {
> >>> +        Start += EFI_PAGES_TO_SIZE (1);
> >>> +      } else {
> >>> +        break;
> >>> +      }
> >>> +
> >>> +      Bitmap = RShiftU64(Bitmap, 1);
> >>> +    }
> >>> +  }
> >>> +
> >>> +  if (AvailablePages) {
> >>> +    DEBUG ((DEBUG_INFO, "Promoted pages: %lX (%X)\r\n", Start, AvailablePages));
> >>> +    if (gCpu != NULL) {
> >>> +      //
> >>> +      // Set flag to make sure allocating memory without GUARD for page table
> >>> +      // operation; otherwise infinite loops could be caused.
> >>> +      //
> >>> +      mOnGuarding = TRUE;
> >>> +      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE (AvailablePages), 0);
> >>> +      ASSERT_EFI_ERROR (Status);
> >>> +      mOnGuarding = FALSE;
> >>> +    }
> >>> +
> >>> +    mLastPromotedPage = Start;
> >>> +    *StartAddress     = Start;
> >>> +    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
> >>> +    return TRUE;
> >>> +  }
> >>> +
> >>> +  return FALSE;
> >>> +}
> >>> +
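
The scan loop above can be read in isolation: each 64-bit map entry covers
64 pages, and a set bit means "freed and guarded". Below is a minimal,
illustrative sketch of that inner loop; CountLeadingFreeRun is a
hypothetical helper name used here for explanation, not part of the patch:

  UINTN
  CountLeadingFreeRun (
    IN OUT EFI_PHYSICAL_ADDRESS  *Start,   // advanced past non-free pages
    IN     UINT64                Bitmap    // bit N covers page at *Start + N
    )
  {
    UINTN  AvailablePages;

    AvailablePages = 0;
    while (Bitmap > 0) {
      if ((Bitmap & 1) != 0) {
        ++AvailablePages;                   // extend the current run
      } else if (AvailablePages == 0) {
        *Start += EFI_PAGES_TO_SIZE (1);    // no run started yet; skip page
      } else {
        break;                              // run ended at a clear bit
      }

      Bitmap = RShiftU64 (Bitmap, 1);
    }

    return AvailablePages;
  }
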
> >>>    /**
> >>>      Notify function used to set all Guard pages before CPU Arch Protocol installed.
> >>>    **/
> >>> @@ -1212,7 +1584,14 @@ HeapGuardCpuArchProtocolNotify (
> >>>      )
> >>>    {
> >>>      ASSERT (gCpu != NULL);
> >>> -  SetAllGuardPages ();
> >>> +
> >>> +  if ((PcdGet8 (PcdHeapGuardPropertyMask) & 0x0F) != 0) {
> >>> +    SetAllGuardPages ();
> >>> +  }
> >>> +
> >>> +  if (IsUafEnabled ()) {
> >>> +    GuardAllFreedPages ();
> >>> +  }
> >>>    }
> >>>
> >>>    /**
> >>> @@ -1264,6 +1643,14 @@ DumpGuardedMemoryBitmap (
> >>>      CHAR8     *Ruler1;
> >>>      CHAR8     *Ruler2;
> >>>
> >>> +  if (!IsHeapGuardEnabled () && !IsUafEnabled ()) {
> >>> +    return;
> >>> +  }
> >>> +
> >>> +  if ((GetDebugPrintErrorLevel () & HEAP_GUARD_DEBUG_LEVEL) == 0) {
> >>> +    return;
> >>> +  }
> >>> +
> >>>      if (mGuardedMemoryMap == 0 ||
> >>>          mMapLevel == 0 ||
> >>>          mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> >>> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> >>> index 8c34692439..3f2a5283c2 100644
> >>> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> >>> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> >>> @@ -407,6 +407,72 @@ HeapGuardCpuArchProtocolNotify (
> >>>      VOID
> >>>      );
> >>>
> >>> +/**
> >>> +  This function checks to see if the given memory map descriptor in a memory map
> >>> +  can be merged with any guarded free pages.
> >>> +
> >>> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> >>> +  @param  MaxAddress        Maximum address to stop the merge.
> >>> +
> >>> +  @return VOID
> >>> +
> >>> +**/
> >>> +VOID
> >>> +MergeGuardPages (
> >>> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> >>> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> >>> +  );
> >>> +
> >>> +/**
> >>> +  Record freed pages as well as mark them as not-present, if possible.
> >>> +
> >>> +  @param[in]  BaseAddress   Base address of just freed pages.
> >>> +  @param[in]  Pages         Number of freed pages.
> >>> +
> >>> +  @return VOID.
> >>> +**/
> >>> +VOID
> >>> +EFIAPI
> >>> +GuardFreedPages (
> >>> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> >>> +  IN  UINTN                   Pages
> >>> +  );
> >>> +
> >>> +/**
> >>> +  Check to see if the use-after-free (UAF) feature is enabled or not.
> >>> +
> >>> +  @return TRUE/FALSE.
> >>> +**/
> >>> +BOOLEAN
> >>> +IsUafEnabled (
> >>> +  VOID
> >>> +  );
> >>> +
> >>> +/**
> >>> +  Put part (at most 64 pages a time) of guarded free pages back to the
> >>> +  free page pool.
> >>> +
> >>> +  Use-After-Free detection takes a 'use then throw away' approach to
> >>> +  detect any illegal access to freed memory. The thrown-away memory will
> >>> +  be marked as not-present so that any access to it (after free) will
> >>> +  trigger a page fault.
> >>> +
> >>> +  The problem is that this consumes a lot of memory space. Once no memory
> >>> +  is left in the pool to allocate, we have to restore part of the freed
> >>> +  pages to their normal function. Otherwise the whole system will stop
> >>> +  functioning.
> >>> +
> >>> +  @param  StartAddress    Start address of promoted memory.
> >>> +  @param  EndAddress      End address of promoted memory.
> >>> +
> >>> +  @return TRUE    Succeeded to promote memory.
> >>> +  @return FALSE   No free memory found.
> >>> +
> >>> +**/
> >>> +BOOLEAN
> >>> +PromoteGuardedFreePages (
> >>> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> >>> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> >>> +  );
> >>> +
> >>>    extern BOOLEAN mOnGuarding;
> >>>
> >>>    #endif
> >>> diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
> >>> index 3b4cc08e7c..9f4419fa5b 100644
> >>> --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
> >>> +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
> >>> @@ -401,9 +401,11 @@ PromoteMemoryResource (
> >>>      VOID
> >>>      )
> >>>    {
> >>> -  LIST_ENTRY         *Link;
> >>> -  EFI_GCD_MAP_ENTRY  *Entry;
> >>> -  BOOLEAN            Promoted;
> >>> +  LIST_ENTRY            *Link;
> >>> +  EFI_GCD_MAP_ENTRY     *Entry;
> >>> +  BOOLEAN               Promoted;
> >>> +  EFI_PHYSICAL_ADDRESS  StartAddress;
> >>> +  EFI_PHYSICAL_ADDRESS  EndAddress;
> >>>
> >>>      DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
> >>>
> >>> @@ -451,6 +453,24 @@ PromoteMemoryResource (
> >>>
> >>>      CoreReleaseGcdMemoryLock ();
> >>>
> >>> +  if (!Promoted) {
> >>> +    //
> >>> +    // If Use-After-Free detection is enabled, we might promote pages from
> >>> +    // guarded free pages.
> >>> +    //
> >>> +    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
> >>> +    if (Promoted) {
> >>> +      CoreAcquireGcdMemoryLock ();
> >>> +      CoreAddRange (
> >>> +        EfiConventionalMemory,
> >>> +        StartAddress,
> >>> +        EndAddress,
> >>> +        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
> >>> +        );
> >>> +      CoreReleaseGcdMemoryLock ();
> >>> +    }
> >>> +  }
> >>> +
> >>>      return Promoted;
> >>>    }
> >>>    /**
> >>> @@ -896,9 +916,13 @@ CoreConvertPagesEx (
> >>>        }
> >>>
> >>>        //
> >>> -    // Add our new range in
> >>> +    // Add our new range in. Don't do this for freed pages if Use-After-Free
> >>> +    // detection is enabled.
> >>>        //
> >>> -    CoreAddRange (MemType, Start, RangeEnd, Attribute);
> >>> +    if (!IsUafEnabled () || !ChangingType || MemType != EfiConventionalMemory) {
> >>> +      CoreAddRange (MemType, Start, RangeEnd, Attribute);
> >>> +    }
> >>> +
> >>>        if (ChangingType && (MemType == EfiConventionalMemory)) {
> >>>          //
> >>>          // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
> >>> @@ -1514,6 +1538,7 @@ CoreFreePages (
> >>>
> >>>      Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
> >>>      if (!EFI_ERROR (Status)) {
> >>> +    GuardFreedPages (Memory, NumberOfPages);
> >>>        CoreUpdateProfile (
> >>>          (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
> >>>          MemoryProfileActionFreePages,
> >>> @@ -1908,9 +1933,7 @@ Done:
> >>>      *MemoryMapSize = BufferSize;
> >>>
> >>>      DEBUG_CODE (
> >>> -    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
> >>> -      DumpGuardedMemoryBitmap ();
> >>> -    }
> >>> +    DumpGuardedMemoryBitmap ();
> >>>      );
> >>>
> >>>      return Status;
> >>> diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> >>> index 1ff2061f7f..0cd74267b3 100644
> >>> --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
> >>> +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> >>> @@ -26,7 +26,8 @@ typedef struct {
> >>>    } POOL_FREE;
> >>>
> >>>
> >>> -#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
> >>> +#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
> >>> +#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
> >>>    typedef struct {
> >>>      UINT32          Signature;
> >>>      UINT32          Reserved;
> >>> @@ -367,6 +368,7 @@ CoreAllocatePoolI (
> >>>      UINTN       NoPages;
> >>>      UINTN       Granularity;
> >>>      BOOLEAN     HasPoolTail;
> >>> +  BOOLEAN     PageAsPool;
> >>>
> >>>      ASSERT_LOCKED (&mPoolMemoryLock);
> >>>
> >>> @@ -386,6 +388,7 @@ CoreAllocatePoolI (
> >>>
> >>>      HasPoolTail  = !(NeedGuard &&
> >>>                       ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> >>> +  PageAsPool = (IsUafEnabled () && !mOnGuarding);
> >>>
> >>>      //
> >>>      // Adjusting the Size to be of proper alignment so that
> >>> @@ -406,7 +409,7 @@ CoreAllocatePoolI (
> >>>      // If allocation is over max size, just allocate pages for the request
> >>>      // (slow)
> >>>      //
> >>> -  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
> >>> +  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
> >>>        if (!HasPoolTail) {
> >>>          Size -= sizeof (POOL_TAIL);
> >>>        }
> >>> @@ -498,7 +501,7 @@ Done:
> >>>        //
> >>>        // If we have a pool buffer, fill in the header & tail info
> >>>        //
> >>> -    Head->Signature = POOL_HEAD_SIGNATURE;
> >>> +    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
> >>>        Head->Size      = Size;
> >>>        Head->Type      = (EFI_MEMORY_TYPE) PoolType;
> >>>        Buffer          = Head->Data;
> >>> @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
> >>>      CoreFreePoolPages (Memory, NoPages);
> >>>      CoreReleaseMemoryLock ();
> >>>
> >>> +  GuardFreedPages (Memory, NoPages);
> >>>      ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
> >>>        (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
> >>>    }
> >>> @@ -685,15 +689,19 @@ CoreFreePoolI (
> >>>      UINTN       Granularity;
> >>>      BOOLEAN     IsGuarded;
> >>>      BOOLEAN     HasPoolTail;
> >>> +  BOOLEAN     PageAsPool;
> >>>
> >>>      ASSERT(Buffer != NULL);
> >>>      //
> >>>      // Get the head & tail of the pool entry
> >>>      //
> >>> -  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
> >>> +  Head = BASE_CR (Buffer, POOL_HEAD, Data);
> >>>      ASSERT(Head != NULL);
> >>>
> >>> -  if (Head->Signature != POOL_HEAD_SIGNATURE) {
> >>> +  if (Head->Signature != POOL_HEAD_SIGNATURE &&
> >>> +      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
> >>> +    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
> >>> +            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >>>        return EFI_INVALID_PARAMETER;
> >>>      }
> >>>
> >>> @@ -701,6 +709,7 @@ CoreFreePoolI (
> >>>                    IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
> >>>      HasPoolTail = !(IsGuarded &&
> >>>                      ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> >>> +  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >>>
> >>>      if (HasPoolTail) {
> >>>        Tail = HEAD_TO_TAIL (Head);
> >>> @@ -757,7 +766,7 @@ CoreFreePoolI (
> >>>      //
> >>>      // If it's not on the list, it must be pool pages
> >>>      //
> >>> -  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
> >>> +  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
> >>>
> >>>        //
> >>>        // Return the memory pages back to free memory
> >>> diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> >>> index 05eb4f422b..f6595c90ed 100644
> >>> --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> >>> +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> >>> @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> >>>    #include <Guid/PropertiesTable.h>
> >>>
> >>>    #include "DxeMain.h"
> >>> +#include "HeapGuard.h"
> >>>
> >>>    #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
> >>>      ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
> >>> @@ -205,16 +206,13 @@ MergeMemoryMap (
> >>>        NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
> >>>
> >>>        do {
> >>> -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
> >>> +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
> >>> +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
> >>>          if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
> >>> -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> >>> -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> >>> -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> >>> -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> >>> -        if (NewMemoryMapEntry != MemoryMapEntry) {
> >>> -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> >>> -        }
> >>> -
> >>> +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> >>> +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> >>> +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> >>> +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> >>>            NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
> >>>            continue;
> >>>          } else {
> >>>


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection
  2018-10-23  1:24         ` Wang, Jian J
@ 2018-10-23  3:14           ` Zeng, Star
  0 siblings, 0 replies; 16+ messages in thread
From: Zeng, Star @ 2018-10-23  3:14 UTC (permalink / raw)
  To: Wang, Jian J, edk2-devel@lists.01.org
  Cc: Kinney, Michael D, Ni, Ruiyu, Yao, Jiewen, Laszlo Ersek,
	star.zeng

On 2018/10/23 9:24, Wang, Jian J wrote:
> Star,
> 
> Regards,
> Jian
> 
> 
>> -----Original Message-----
>> From: Zeng, Star
>> Sent: Monday, October 22, 2018 4:24 PM
>> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
>> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
>> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
>> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
>> Subject: Re: [edk2] [PATCH 3/3] MdeModulePkg/Core: add use-after-free
>> memory detection
>>
>> On 2018/10/22 15:12, Wang, Jian J wrote:
>>> Star,
>>>
>>> Thanks for the comment. Please see my opinions below.
>>>
>>> Regards,
>>> Jian
>>>
>>>
>>>> -----Original Message-----
>>>> From: Zeng, Star
>>>> Sent: Monday, October 22, 2018 10:53 AM
>>>> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
>>>> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
>>>> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
>>>> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
>>>> Subject: Re: [edk2] [PATCH 3/3] MdeModulePkg/Core: add use-after-free
>>>> memory detection
>>>>
>>>> On 2018/10/19 9:50, Jian J Wang wrote:
>>>>> UAF (Use-After-Free) memory detection is a new feature introduced to
>>>>> detect illegal access to memory which has been freed. The principle
>>>>> behind it is similar to the heap guard feature: the core turns all pool
>>>>> memory allocations into page allocations and marks them as not-present
>>>>> once they are freed.
>>>>>
>>>>> This also implies that, once a page is allocated and freed, it cannot
>>>>> be re-allocated. This brings another issue: there is a risk that the
>>>>> memory space will be used up. To address it, the memory service adds
>>>>> logic to put part (at most 64 pages at a time) of the freed pages back
>>>>> into the page pool, so that the memory service still has memory to
>>>>> allocate after all memory space has been allocated once. This is called
>>>>> memory promotion. The promoted pages are always the eldest pages that
>>>>> have been freed.
>>>>>
>>>>> To use this feature, one can simply set the following PCD to 1:
>>>>>   gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
>>>>>
>>>>> Please note this feature cannot be used together with the heap guard
>>>>> feature controlled by PCD gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask.
>>>>>
>>>>> Cc: Star Zeng <star.zeng@intel.com>
>>>>> Cc: Michael D Kinney <michael.d.kinney@intel.com>
>>>>> Cc: Jiewen Yao <jiewen.yao@intel.com>
>>>>> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
>>>>> Cc: Laszlo Ersek <lersek@redhat.com>
>>>>> Contributed-under: TianoCore Contribution Agreement 1.1
>>>>> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
>>>>
>>>> I did not review the whole patch thoroughly. But I suggest putting
>>>> the code for UAF in a separate file if it is feasible. I am aware that
>>>> UAF will share much code with HeapGuard, so could we have Uaf.c,
>>>> HeapGuard.c and GuardPage.c (the file names are just examples)?
>>>>
>>>
>>> I think we can take UAF as a special kind of heap guard: guarding the
>>> freed heap memory instead of guarding the allocated heap memory. So
>>> UAF is a complement of Heap Guard, or part of Heap Guard. From this
>>> perspective, it's not necessary to put it in a separate file.
>>
>> Very good point. From this perspective, I agree.
>> Then I would like to suggest not introducing a new PCD
>> PcdUseAfterFreeDetectionPropertyMask, but using one bit of
>> PcdHeapGuardPropertyMask for it, since we think it is part of Heap Guard.
>> The benefits are
>> 1. No confusion between Heap Guard and Use After Free.
>> 2. No need to introduce a new PCD.
>> 3. Can reuse BIT6 - Enable non-stop mode.
>> 4. No need to update DxeIplPeim to enable IA32 PAE paging.
>>
> 
> Good idea. I'll try it. Thanks.
> 
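
If the suggestion quoted above were adopted, the gate might look like the
sketch below. The bit position is only a placeholder chosen for
illustration, not a committed assignment:

  #define HEAP_GUARD_FREED_MEMORY_BIT  BIT4   // hypothetical bit choice

  BOOLEAN
  IsUafEnabled (
    VOID
    )
  {
    return ((PcdGet8 (PcdHeapGuardPropertyMask) & HEAP_GUARD_FREED_MEMORY_BIT) != 0);
  }

This would keep the feature inside the existing Heap Guard configuration
space, so no new PCD declaration would be needed.
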
>>>
>>>>> ---
>>>>>     MdeModulePkg/Core/Dxe/DxeMain.h              |   1 +
>>>>>     MdeModulePkg/Core/Dxe/DxeMain.inf            |   1 +
>>>>>     MdeModulePkg/Core/Dxe/Gcd/Gcd.c              |   8 +
>>>>>     MdeModulePkg/Core/Dxe/Mem/HeapGuard.c        | 393 ++++++++++++++++++++++++++-
>>>>>     MdeModulePkg/Core/Dxe/Mem/HeapGuard.h        |  66 +++++
>>>>>     MdeModulePkg/Core/Dxe/Mem/Page.c             |  39 ++-
>>>>>     MdeModulePkg/Core/Dxe/Mem/Pool.c             |  21 +-
>>>>>     MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c |  16 +-
>>>>>     8 files changed, 519 insertions(+), 26 deletions(-)
>>>>>
>>>>> diff --git a/MdeModulePkg/Core/Dxe/DxeMain.h b/MdeModulePkg/Core/Dxe/DxeMain.h
>>>>> index 2dec9da5e3..ae75cc5b25 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/DxeMain.h
>>>>> +++ b/MdeModulePkg/Core/Dxe/DxeMain.h
>>>>> @@ -92,6 +92,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>>>>>     #include <Library/DxeServicesLib.h>
>>>>>     #include <Library/DebugAgentLib.h>
>>>>>     #include <Library/CpuExceptionHandlerLib.h>
>>>>> +#include <Library/DebugPrintErrorLevelLib.h>
>>>>>
>>>>>
>>>>>     //
>>>>> diff --git a/MdeModulePkg/Core/Dxe/DxeMain.inf b/MdeModulePkg/Core/Dxe/DxeMain.inf
>>>>> index 10375443c0..d91258c087 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/DxeMain.inf
>>>>> +++ b/MdeModulePkg/Core/Dxe/DxeMain.inf
>>>>> @@ -198,6 +198,7 @@
>>>>>       gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType                       ## CONSUMES
>>>>>       gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask                   ## CONSUMES
>>>>>       gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard                           ## CONSUMES
>>>>> +  gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask          ## CONSUMES
>>>>>
>>>>>     # [Hob]
>>>>>     # RESOURCE_DESCRIPTOR   ## CONSUMES
>>>>> diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
>>>>> index d9c65a8045..e43ec74010 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
>>>>> +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
>>>>> @@ -160,6 +160,10 @@ CoreDumpGcdMemorySpaceMap (
>>>>>         EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *MemorySpaceMap;
>>>>>         UINTN                            Index;
>>>>>
>>>>> +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
>>>>> +      return;
>>>>> +    }
>>>>
>>>> I am aware the purpose of this change is to optimize some code when
>>>> DEBUG_GCD is not set.
>>>> But I do not suggest newly introducing a dependency on
>>>> DebugPrintErrorLevelLib; I think this check should be hidden in the
>>>> instance of DebugLib. Maybe a new macro DEBUG_CODE_ERROR_LEVEL (this
>>>> macro name is just an example) can be introduced if we really want to
>>>> do that.
>>>>
>>>
>>> There are two purposes here: one is optimization, but the other is to fix a
>>> real issue: GCD memory lock reentrance (see below for details)
>>>
>>>      CoreDumpGcdMemorySpaceMap()
>>> => CoreGetMemorySpaceMap()
>>> => CoreAcquireGcdMemoryLock () *
>>>      AllocatePool()
>>> => InternalAllocatePool()
>>> => CoreAllocatePool()
>>> => CoreAllocatePoolI()
>>> => CoreAllocatePoolPagesI()
>>> => CoreAllocatePoolPages()
>>> => FindFreePages()
>>> => PromoteMemoryResource()
>>> => CoreAcquireGcdMemoryLock() **
>>>
>>> If GetDebugPrintErrorLevel() is not recommended, maybe we need to fix the
>>> above issue in a different way. For example, change CoreGetMemorySpaceMap()
>>> to avoid the lock conflict, if possible. In addition, we could set
>>> PcdFixedDebugPrintErrorLevel to control the DEBUG_GCD print.
>>
>> Got it, thanks.
>> I still do not suggest using GetDebugPrintErrorLevel with a new
>> DebugPrintErrorLevelLib dependency.
>> Seemingly, the problem could not be resolved thoroughly anyway, because it
>> may still happen when DEBUG_GCD is set, right?
>>
>> A possible solution is adding CoreAcquireGcdMemoryLockOrFail, updating
>> PromoteMemoryResource to use CoreAcquireGcdMemoryLockOrFail, and
>> updating the ASSERT in CoreDumpGcdMemorySpaceMap with an error check.
>> Maybe there are other good ideas.
>>
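
A minimal sketch of that suggestion, assuming the existing
CoreAcquireLockOrFail() helper and the mGcdMemorySpaceLock global in Gcd.c;
this is illustrative only, not the committed fix:

  EFI_STATUS
  CoreAcquireGcdMemoryLockOrFail (
    VOID
    )
  {
    return CoreAcquireLockOrFail (&mGcdMemorySpaceLock);
  }

  //
  // In PromoteMemoryResource(): give up the promotion attempt instead of
  // re-entering a lock the caller may already hold.
  //
  if (EFI_ERROR (CoreAcquireGcdMemoryLockOrFail ())) {
    return FALSE;
  }
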
> 
> The actual issue is that, in the code of CoreGetMemorySpaceMap(), AllocatePool()
> is inside the GCD lock scope. Maybe we could move it out of the lock. Do you
> know if AllocatePool will change the GCD memory map? I can't see it from the
> code. But maybe I missed something.
> 
>    CoreAcquireGcdMemoryLock ();
>    ...
>    ...
>    *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
>    ...
>    ...
> Done:
>    CoreReleaseGcdMemoryLock ();
> 
> =======>
> 
>    CoreAcquireGcdMemoryLock ();
>    ...
>    CoreReleaseGcdMemoryLock ();
>    ...
>    *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
>    ...
>    CoreAcquireGcdMemoryLock ();
>    ...
> Done:
>    CoreReleaseGcdMemoryLock ();

Good idea. But we may need to recalculate NumberOfDescriptors under the
lock, and check whether it has been updated and the pool buffer needs to be
reallocated or not.

Thanks,
Star
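
Combining the two ideas, the restructured CoreGetMemorySpaceMap() might look
like the sketch below. CountGcdMemoryDescriptors() is a hypothetical helper
that walks the GCD map under the lock, and error paths are trimmed; this is
a sketch of the proposal, not the committed fix:

  do {
    CoreAcquireGcdMemoryLock ();
    *NumberOfDescriptors = CountGcdMemoryDescriptors ();
    CoreReleaseGcdMemoryLock ();

    //
    // Allocate outside the lock so that a memory promotion triggered inside
    // AllocatePool() cannot re-enter the GCD lock.
    //
    *MemorySpaceMap = AllocatePool (
                        *NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR)
                        );
    if (*MemorySpaceMap == NULL) {
      return EFI_OUT_OF_RESOURCES;
    }

    CoreAcquireGcdMemoryLock ();
    if (CountGcdMemoryDescriptors () == *NumberOfDescriptors) {
      break;    // map unchanged; fill the buffer while still holding the lock
    }

    //
    // The map changed while the lock was dropped; free the buffer and retry
    // with the new count.
    //
    CoreReleaseGcdMemoryLock ();
    FreePool (*MemorySpaceMap);
  } while (TRUE);

  //
  // ... copy the descriptors here, then CoreReleaseGcdMemoryLock ().
  //
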

> 
>>
>> Thanks,
>> Star
>>
>>>
>>>>
>>>> Thanks,
>>>> Star
>>>>
>>>>> +
>>>>>         Status = CoreGetMemorySpaceMap (&NumberOfDescriptors, &MemorySpaceMap);
>>>>>         ASSERT (Status == EFI_SUCCESS && MemorySpaceMap != NULL);
>>>>>
>>>>> @@ -202,6 +206,10 @@ CoreDumpGcdIoSpaceMap (
>>>>>         EFI_GCD_IO_SPACE_DESCRIPTOR  *IoSpaceMap;
>>>>>         UINTN                        Index;
>>>>>
>>>>> +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
>>>>> +      return;
>>>>> +    }
>>>>> +
>>>>>         Status = CoreGetIoSpaceMap (&NumberOfDescriptors, &IoSpaceMap);
>>>>>         ASSERT (Status == EFI_SUCCESS && IoSpaceMap != NULL);
>>>>>
>>>>> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
>>>> b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
>>>>> index 663f969c0d..b120c04f8f 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
>>>>> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
>>>>> @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
>>>>>     GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
>>>>>                                         = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
>>>>>
>>>>> +//
>>>>> +// Used for Use-After-Free memory detection to promote freed but not used pages.
>>>>> +//
>>>>> +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
>>>>> +
>>>>>     /**
>>>>>       Set corresponding bits in bitmap table to 1 according to the address.
>>>>>
>>>>> @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
>>>>>
>>>>>       @return An integer containing the guarded memory bitmap.
>>>>>     **/
>>>>> -UINTN
>>>>> +UINT64
>>>>>     GetGuardedMemoryBits (
>>>>>       IN EFI_PHYSICAL_ADDRESS    Address,
>>>>>       IN UINTN                   NumberOfPages
>>>>> @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
>>>>>     {
>>>>>       UINT64            *BitMap;
>>>>>       UINTN             Bits;
>>>>> -  UINTN             Result;
>>>>> +  UINT64            Result;
>>>>>       UINTN             Shift;
>>>>>       UINTN             BitsToUnitEnd;
>>>>>
>>>>> @@ -1203,6 +1208,373 @@ SetAllGuardPages (
>>>>>       }
>>>>>     }
>>>>>
>>>>> +/**
>>>>> +  Check to see if the Use-After-Free (UAF) feature is enabled or not.
>>>>> +
>>>>> +  @return TRUE/FALSE.
>>>>> +**/
>>>>> +BOOLEAN
>>>>> +IsUafEnabled (
>>>>> +  VOID
>>>>> +  )
>>>>> +{
>>>>> +  ASSERT (!IsHeapGuardEnabled());
>>>>> +  return ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) != 0);
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> +  Find the address of top-most guarded free page.
>>>>> +
>>>>> +  @param[out]  Address    Start address of top-most guarded free page.
>>>>> +
>>>>> +  @return VOID.
>>>>> +**/
>>>>> +VOID
>>>>> +GetLastGuardedFreePageAddress (
>>>>> +  OUT EFI_PHYSICAL_ADDRESS      *Address
>>>>> +  )
>>>>> +{
>>>>> +  EFI_PHYSICAL_ADDRESS    AddressGranularity;
>>>>> +  EFI_PHYSICAL_ADDRESS    BaseAddress;
>>>>> +  UINTN                   Level;
>>>>> +  UINT64                  Map;
>>>>> +  INTN                    Index;
>>>>> +
>>>>> +  ASSERT (mMapLevel >= 1);
>>>>> +
>>>>> +  BaseAddress = 0;
>>>>> +  Map = mGuardedMemoryMap;
>>>>> +  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
>>>>> +       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
>>>>> +       ++Level) {
>>>>> +    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
>>>>> +
>>>>> +    //
>>>>> +    // Find the non-NULL entry at largest index.
>>>>> +    //
>>>>> +    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
>>>>> +      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
>>>>> +        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
>>>>> +        Map = ((UINT64 *)(UINTN)Map)[Index];
>>>>> +        break;
>>>>> +      }
>>>>> +    }
>>>>> +  }
>>>>> +
>>>>> +  //
>>>>> +  // Find the non-zero MSB then get the page address.
>>>>> +  //
>>>>> +  while (Map != 0) {
>>>>> +    Map = RShiftU64 (Map, 1);
>>>>> +    BaseAddress += EFI_PAGES_TO_SIZE (1);
>>>>> +  }
>>>>> +
>>>>> +  *Address = BaseAddress;
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> +  Record freed pages.
>>>>> +
>>>>> +  @param[in]  BaseAddress   Base address of just freed pages.
>>>>> +  @param[in]  Pages         Number of freed pages.
>>>>> +
>>>>> +  @return VOID.
>>>>> +**/
>>>>> +VOID
>>>>> +MarkFreedPages (
>>>>> +  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
>>>>> +  IN UINTN                    Pages
>>>>> +  )
>>>>> +{
>>>>> +  SetGuardedMemoryBits (BaseAddress, Pages);
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> +  Record freed pages as well as mark them as not-present, if possible.
>>>>> +
>>>>> +  @param[in]  BaseAddress   Base address of just freed pages.
>>>>> +  @param[in]  Pages         Number of freed pages.
>>>>> +
>>>>> +  @return VOID.
>>>>> +**/
>>>>> +VOID
>>>>> +EFIAPI
>>>>> +GuardFreedPages (
>>>>> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
>>>>> +  IN  UINTN                   Pages
>>>>> +  )
>>>>> +{
>>>>> +  EFI_STATUS      Status;
>>>>> +
>>>>> +  //
>>>>> +  // Do nothing if
>>>>> +  // - Use-After-Free detection is not enabled
>>>>> +  // - Freed memory is legacy memory lower than 1MB
>>>>> +  //
>>>>> +  if (!IsUafEnabled () || BaseAddress < BASE_1MB) {
>>>>> +    return;
>>>>> +  }
>>>>> +
>>>>> +  MarkFreedPages (BaseAddress, Pages);
>>>>> +  if (gCpu != NULL) {
>>>>> +    //
>>>>> +    // Set flag to make sure allocating memory without GUARD for page table
>>>>> +    // operation; otherwise infinite loops could be caused.
>>>>> +    //
>>>>> +    mOnGuarding = TRUE;
>>>>> +    //
>>>>> +    // Note: This might overwrite other attributes needed by other features,
>>>>> +    // such as NX memory protection.
>>>>> +    //
>>>>> +    Status = gCpu->SetMemoryAttributes (
>>>>> +                     gCpu,
>>>>> +                     BaseAddress,
>>>>> +                     EFI_PAGES_TO_SIZE(Pages),
>>>>> +                     EFI_MEMORY_RP
>>>>> +                     );
>>>>> +    //
>>>>> +    // Normally we should check the returned Status. But there might be memory
>>>>> +    // alloc/free involved in SetMemoryAttributes(), which would fail this
>>>>> +    // call. It's a rare case, so it's OK to let a few tiny holes go unguarded.
>>>>> +    //
>>>>> +    if (EFI_ERROR (Status)) {
>>>>> +      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%d)\n", BaseAddress, Pages));
>>>>> +    }
>>>>> +    mOnGuarding = FALSE;
>>>>> +  }
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> +  Mark all pages freed before CPU Arch Protocol as not-present.
>>>>> +
>>>>> +**/
>>>>> +VOID
>>>>> +GuardAllFreedPages (
>>>>> +  VOID
>>>>> +  )
>>>>> +{
>>>>> +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
>>>>> +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
>>>>> +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
>>>>> +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
>>>>> +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
>>>>> +  UINT64    TableEntry;
>>>>> +  UINT64    Address;
>>>>> +  UINT64    GuardPage;
>>>>> +  INTN      Level;
>>>>> +  UINTN     BitIndex;
>>>>> +  UINTN     GuardPageNumber;
>>>>> +
>>>>> +  if (mGuardedMemoryMap == 0 ||
>>>>> +      mMapLevel == 0 ||
>>>>> +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
>>>>> +    return;
>>>>> +  }
>>>>> +
>>>>> +  CopyMem (Entries, mLevelMask, sizeof (Entries));
>>>>> +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
>>>>> +
>>>>> +  SetMem (Tables, sizeof(Tables), 0);
>>>>> +  SetMem (Addresses, sizeof(Addresses), 0);
>>>>> +  SetMem (Indices, sizeof(Indices), 0);
>>>>> +
>>>>> +  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
>>>>> +  Tables[Level]   = mGuardedMemoryMap;
>>>>> +  Address         = 0;
>>>>> +  GuardPage       = (UINT64)-1;
>>>>> +  GuardPageNumber = 0;
>>>>> +
>>>>> +  while (TRUE) {
>>>>> +    if (Indices[Level] > Entries[Level]) {
>>>>> +      Tables[Level] = 0;
>>>>> +      Level        -= 1;
>>>>> +    } else {
>>>>> +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
>>>>> +      Address     = Addresses[Level];
>>>>> +
>>>>> +      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
>>>>> +        Level            += 1;
>>>>> +        Tables[Level]     = TableEntry;
>>>>> +        Addresses[Level]  = Address;
>>>>> +        Indices[Level]    = 0;
>>>>> +
>>>>> +        continue;
>>>>> +      } else {
>>>>> +        BitIndex = 1;
>>>>> +        while (BitIndex != 0) {
>>>>> +          if ((TableEntry & BitIndex) != 0) {
>>>>> +            if (GuardPage == (UINT64)-1) {
>>>>> +              GuardPage = Address;
>>>>> +            }
>>>>> +            ++GuardPageNumber;
>>>>> +          } else if (GuardPageNumber > 0) {
>>>>> +            GuardFreedPages (GuardPage, GuardPageNumber);
>>>>> +            GuardPageNumber = 0;
>>>>> +            GuardPage       = (UINT64)-1;
>>>>> +          }
>>>>> +
>>>>> +          if (TableEntry == 0) {
>>>>> +            break;
>>>>> +          }
>>>>> +
>>>>> +          Address += EFI_PAGES_TO_SIZE (1);
>>>>> +          BitIndex = LShiftU64(BitIndex, 1);
>>>>> +        }
>>>>> +      }
>>>>> +    }
>>>>> +
>>>>> +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
>>>>> +      break;
>>>>> +    }
>>>>> +
>>>>> +    Indices[Level] += 1;
>>>>> +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
>>>>> +    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
>>>>> +
>>>>> +  }
>>>>> +
>>>>> +  //
>>>>> +  // Update the maximum address of freed page which can be used for memory
>>>>> +  // promotion upon out-of-memory-space.
>>>>> +  //
>>>>> +  GetLastGuardedFreePageAddress (&Address);
>>>>> +  if (Address != 0) {
>>>>> +    mLastPromotedPage = Address;
>>>>> +  }
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> +  This function checks to see if the given memory map descriptor in a memory map
>>>>> +  can be merged with any guarded free pages.
>>>>> +
>>>>> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
>>>>> +  @param  MaxAddress        Maximum address to stop the merge.
>>>>> +
>>>>> +  @return VOID
>>>>> +
>>>>> +**/
>>>>> +VOID
>>>>> +MergeGuardPages (
>>>>> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
>>>>> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
>>>>> +  )
>>>>> +{
>>>>> +  EFI_PHYSICAL_ADDRESS        EndAddress;
>>>>> +  UINT64                      Bitmap;
>>>>> +  INTN                        Pages;
>>>>> +
>>>>> +  if (!IsUafEnabled () ||
>>>>> +      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
>>>>> +    return;
>>>>> +  }
>>>>> +
>>>>> +  Bitmap = 0;
>>>>> +  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
>>>>> +  Pages -= MemoryMapEntry->NumberOfPages;
>>>>> +  while (Pages > 0) {
>>>>> +    if (Bitmap == 0) {
>>>>> +      EndAddress = MemoryMapEntry->PhysicalStart +
>>>>> +                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
>>>>> +      Bitmap = GetGuardedMemoryBits (EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
>>>>> +    }
>>>>> +
>>>>> +    if ((Bitmap & 1) == 0) {
>>>>> +      break;
>>>>> +    }
>>>>> +
>>>>> +    Pages--;
>>>>> +    MemoryMapEntry->NumberOfPages++;
>>>>> +    Bitmap = RShiftU64 (Bitmap, 1);
>>>>> +  }
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> +  Put part (at most 64 pages a time) of guarded free pages back to the
>>>>> +  free page pool.
>>>>> +
>>>>> +  Use-After-Free detection takes a 'use then throw away' approach to
>>>>> +  detect any illegal access to freed memory. The thrown-away memory will
>>>>> +  be marked as not-present so that any access to it (after free) will
>>>>> +  trigger a page fault.
>>>>> +
>>>>> +  The problem is that this consumes a lot of memory space. Once no memory
>>>>> +  is left in the pool to allocate, we have to restore part of the freed
>>>>> +  pages to their normal function. Otherwise the whole system will stop
>>>>> +  functioning.
>>>>> +
>>>>> +  @param  StartAddress    Start address of promoted memory.
>>>>> +  @param  EndAddress      End address of promoted memory.
>>>>> +
>>>>> +  @return TRUE    Succeeded to promote memory.
>>>>> +  @return FALSE   No free memory found.
>>>>> +
>>>>> +**/
>>>>> +BOOLEAN
>>>>> +PromoteGuardedFreePages (
>>>>> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
>>>>> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
>>>>> +  )
>>>>> +{
>>>>> +  EFI_STATUS              Status;
>>>>> +  UINTN                   AvailablePages;
>>>>> +  UINT64                  Bitmap;
>>>>> +  EFI_PHYSICAL_ADDRESS    Start;
>>>>> +
>>>>> +  if (!IsUafEnabled ()) {
>>>>> +    return FALSE;
>>>>> +  }
>>>>> +
>>>>> +  //
>>>>> +  // Similar to memory allocation service, always search the freed pages in
>>>>> +  // descending direction.
>>>>> +  //
>>>>> +  Start           = mLastPromotedPage;
>>>>> +  AvailablePages  = 0;
>>>>> +  while (AvailablePages == 0) {
>>>>> +    Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
>>>>> +    //
>>>>> +    // If the address wraps around, try the really freed pages at top.
>>>>> +    //
>>>>> +    if (Start > mLastPromotedPage) {
>>>>> +      GetLastGuardedFreePageAddress (&Start);
>>>>> +      ASSERT (Start != 0);
>>>>> +      Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
>>>>> +    }
>>>>> +
>>>>> +    Bitmap = GetGuardedMemoryBits (Start, GUARDED_HEAP_MAP_ENTRY_BITS);
>>>>> +    while (Bitmap > 0) {
>>>>> +      if ((Bitmap & 1) != 0) {
>>>>> +        ++AvailablePages;
>>>>> +      } else if (AvailablePages == 0) {
>>>>> +        Start += EFI_PAGES_TO_SIZE (1);
>>>>> +      } else {
>>>>> +        break;
>>>>> +      }
>>>>> +
>>>>> +      Bitmap = RShiftU64(Bitmap, 1);
>>>>> +    }
>>>>> +  }
>>>>> +
>>>>> +  if (AvailablePages != 0) {
>>>>> +    DEBUG ((DEBUG_INFO, "Promoted pages: %lX (%X)\r\n", Start, AvailablePages));
>>>>> +    if (gCpu != NULL) {
>>>>> +      //
>>>>> +      // Set flag to make sure allocating memory without GUARD for page table
>>>>> +      // operation; otherwise infinite loops could be caused.
>>>>> +      //
>>>>> +      mOnGuarding = TRUE;
>>>>> +      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE (AvailablePages), 0);
>>>>> +      ASSERT_EFI_ERROR (Status);
>>>>> +      mOnGuarding = FALSE;
>>>>> +    }
>>>>> +
>>>>> +    mLastPromotedPage = Start;
>>>>> +    *StartAddress     = Start;
>>>>> +    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
>>>>> +    return TRUE;
>>>>> +  }
>>>>> +
>>>>> +  return FALSE;
>>>>> +}
>>>>> +
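For readers skimming the interface: a minimal caller sketch (the local
variable names are mine; the real consumer is the PromoteMemoryResource ()
hunk further down in Page.c):

  EFI_PHYSICAL_ADDRESS  Start;
  EFI_PHYSICAL_ADDRESS  End;

  if (PromoteGuardedFreePages (&Start, &End)) {
    //
    // [Start, End] has been made present again by the routine above and
    // can be handed back to the page allocator, e.g. via CoreAddRange ().
    //
  }
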
>>>>>     /**
>>>>>       Notify function used to set all Guard pages before CPU Arch Protocol installed.
>>>>>     **/
>>>>> @@ -1212,7 +1584,14 @@ HeapGuardCpuArchProtocolNotify (
>>>>>       )
>>>>>     {
>>>>>       ASSERT (gCpu != NULL);
>>>>> -  SetAllGuardPages ();
>>>>> +
>>>>> +  if ((PcdGet8 (PcdHeapGuardPropertyMask) & 0x0F) != 0) {
>>>>> +    SetAllGuardPages ();
>>>>> +  }
>>>>> +
>>>>> +  if (IsUafEnabled ()) {
>>>>> +    GuardAllFreedPages ();
>>>>> +  }
>>>>>     }
>>>>>
>>>>>     /**
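The 0x0F mask looks right to me: as far as I can tell from MdeModulePkg.dec,
the low four bits of PcdHeapGuardPropertyMask select the page/pool guard for
UEFI and SMM, and those are the only sub-features that need
SetAllGuardPages () here. My reading of the bit layout (the macro names
below are mine, please double-check against the .dec):

  #define HEAP_GUARD_UEFI_PAGE  BIT0   // guard UEFI page allocations
  #define HEAP_GUARD_UEFI_POOL  BIT1   // guard UEFI pool allocations
  #define HEAP_GUARD_SMM_PAGE   BIT2   // guard SMM page allocations
  #define HEAP_GUARD_SMM_POOL   BIT3   // guard SMM pool allocations
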
>>>>> @@ -1264,6 +1643,14 @@ DumpGuardedMemoryBitmap (
>>>>>       CHAR8     *Ruler1;
>>>>>       CHAR8     *Ruler2;
>>>>>
>>>>> +  if (!IsHeapGuardEnabled () && !IsUafEnabled ()) {
>>>>> +    return;
>>>>> +  }
>>>>> +
>>>>> +  if ((GetDebugPrintErrorLevel () & HEAP_GUARD_DEBUG_LEVEL) == 0) {
>>>>> +    return;
>>>>> +  }
>>>>> +
>>>>>       if (mGuardedMemoryMap == 0 ||
>>>>>           mMapLevel == 0 ||
>>>>>           mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
>>>>> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
>>>>> index 8c34692439..3f2a5283c2 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
>>>>> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
>>>>> @@ -407,6 +407,72 @@ HeapGuardCpuArchProtocolNotify (
>>>>>       VOID
>>>>>       );
>>>>>
>>>>> +/**
>>>>> +  Check to see if the given memory map descriptor can be merged with any
>>>>> +  adjacent guarded free pages.
>>>>> +
>>>>> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
>>>>> +  @param  MaxAddress        Maximum address to stop the merge.
>>>>> +
>>>>> +  @return VOID
>>>>> +
>>>>> +**/
>>>>> +VOID
>>>>> +MergeGuardPages (
>>>>> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
>>>>> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
>>>>> +  );
>>>>> +
>>>>> +/**
>>>>> +  Record freed pages and, if possible, mark them as not-present.
>>>>> +
>>>>> +  @param[in]  BaseAddress   Base address of just freed pages.
>>>>> +  @param[in]  Pages         Number of freed pages.
>>>>> +
>>>>> +  @return VOID.
>>>>> +**/
>>>>> +VOID
>>>>> +EFIAPI
>>>>> +GuardFreedPages (
>>>>> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
>>>>> +  IN  UINTN                   Pages
>>>>> +  );
>>>>> +
>>>>> +/**
>>>>> +  Check to see if the use-after-free (UAF) detection feature is enabled.
>>>>> +
>>>>> +  @return TRUE if UAF detection is enabled; FALSE otherwise.
>>>>> +**/
>>>>> +BOOLEAN
>>>>> +IsUafEnabled (
>>>>> +  VOID
>>>>> +  );
>>>>> +
>>>>> +/**
>>>>> +  Put part (at most 64 pages a time) of the guarded freed pages back into
>>>>> +  the free page pool.
>>>>> +
>>>>> +  Use-After-Free detection takes a "use once, then throw away" approach to
>>>>> +  detect illegal access to freed memory. The thrown-away memory is marked
>>>>> +  as not-present, so that any access to it (after free) triggers a page
>>>>> +  fault.
>>>>> +
>>>>> +  The problem is that this consumes a lot of memory space. Once no memory
>>>>> +  is left in the pool to allocate, part of the freed pages must be restored
>>>>> +  to their normal function; otherwise the whole system stops functioning.
>>>>> +
>>>>> +  @param  StartAddress    Start address of promoted memory.
>>>>> +  @param  EndAddress      End address of promoted memory.
>>>>> +
>>>>> +  @return TRUE    Memory was promoted successfully.
>>>>> +  @return FALSE   No free memory was found.
>>>>> +
>>>>> +**/
>>>>> +BOOLEAN
>>>>> +PromoteGuardedFreePages (
>>>>> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
>>>>> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
>>>>> +  );
>>>>> +
>>>>>     extern BOOLEAN mOnGuarding;
>>>>>
>>>>>     #endif
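One question on the header: IsUafEnabled () is declared here but its body is
not in the hunks I can see. I assume it is just a cached read of the new
detection PCD from patch 1/3, something along these lines (a sketch only;
the exact bit tested and the caching are my assumptions):

  BOOLEAN
  IsUafEnabled (
    VOID
    )
  {
    STATIC BOOLEAN  mUafEnabled = FALSE;
    STATIC BOOLEAN  mUafChecked = FALSE;

    //
    // Assumed: bit 0 of the new property-mask PCD turns the feature on.
    // Cache the result so we do not hit PcdGet8 () on every (de)allocation.
    //
    if (!mUafChecked) {
      mUafEnabled = ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) != 0);
      mUafChecked = TRUE;
    }

    return mUafEnabled;
  }
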
>>>>> diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
>>>>> index 3b4cc08e7c..9f4419fa5b 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
>>>>> +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
>>>>> @@ -401,9 +401,11 @@ PromoteMemoryResource (
>>>>>       VOID
>>>>>       )
>>>>>     {
>>>>> -  LIST_ENTRY         *Link;
>>>>> -  EFI_GCD_MAP_ENTRY  *Entry;
>>>>> -  BOOLEAN            Promoted;
>>>>> +  LIST_ENTRY            *Link;
>>>>> +  EFI_GCD_MAP_ENTRY     *Entry;
>>>>> +  BOOLEAN               Promoted;
>>>>> +  EFI_PHYSICAL_ADDRESS  StartAddress;
>>>>> +  EFI_PHYSICAL_ADDRESS  EndAddress;
>>>>>
>>>>>       DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
>>>>>
>>>>> @@ -451,6 +453,24 @@ PromoteMemoryResource (
>>>>>
>>>>>       CoreReleaseGcdMemoryLock ();
>>>>>
>>>>> +  if (!Promoted) {
>>>>> +    //
>>>>> +    // If Use-After-Free detection is enabled, we might be able to promote
>>>>> +    // some of the guarded freed pages.
>>>>> +    //
>>>>> +    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
>>>>> +    if (Promoted) {
>>>>> +      CoreAcquireGcdMemoryLock ();
>>>>> +      CoreAddRange (
>>>>> +        EfiConventionalMemory,
>>>>> +        StartAddress,
>>>>> +        EndAddress,
>>>>> +        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
>>>>> +        );
>>>>> +      CoreReleaseGcdMemoryLock ();
>>>>> +    }
>>>>> +  }
>>>>> +
>>>>>       return Promoted;
>>>>>     }
>>>>>     /**
>>>>> @@ -896,9 +916,13 @@ CoreConvertPagesEx (
>>>>>         }
>>>>>
>>>>>         //
>>>>> -    // Add our new range in
>>>>> +    // Add our new range in. Don't do this for freed pages if Use-After-Free
>>>>> +    // detection is enabled.
>>>>>         //
>>>>> -    CoreAddRange (MemType, Start, RangeEnd, Attribute);
>>>>> +    if (!IsUafEnabled () || !ChangingType || MemType != EfiConventionalMemory) {
>>>>> +      CoreAddRange (MemType, Start, RangeEnd, Attribute);
>>>>> +    }
>>>>> +
>>>>>         if (ChangingType && (MemType == EfiConventionalMemory)) {
>>>>>           //
>>>>>           // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
>>>>> @@ -1514,6 +1538,7 @@ CoreFreePages (
>>>>>
>>>>>       Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
>>>>>       if (!EFI_ERROR (Status)) {
>>>>> +    GuardFreedPages (Memory, NumberOfPages);
>>>>>         CoreUpdateProfile (
>>>>>           (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
>>>>>           MemoryProfileActionFreePages,
>>>>> @@ -1908,9 +1933,7 @@ Done:
>>>>>       *MemoryMapSize = BufferSize;
>>>>>
>>>>>       DEBUG_CODE (
>>>>> -    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
>>>>> -      DumpGuardedMemoryBitmap ();
>>>>> -    }
>>>>> +    DumpGuardedMemoryBitmap ();
>>>>>       );
>>>>>
>>>>>       return Status;
>>>>> diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
>>>>> index 1ff2061f7f..0cd74267b3 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
>>>>> +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
>>>>> @@ -26,7 +26,8 @@ typedef struct {
>>>>>     } POOL_FREE;
>>>>>
>>>>>
>>>>> -#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
>>>>> +#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
>>>>> +#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
>>>>>     typedef struct {
>>>>>       UINT32          Signature;
>>>>>       UINT32          Reserved;
>>>>> @@ -367,6 +368,7 @@ CoreAllocatePoolI (
>>>>>       UINTN       NoPages;
>>>>>       UINTN       Granularity;
>>>>>       BOOLEAN     HasPoolTail;
>>>>> +  BOOLEAN     PageAsPool;
>>>>>
>>>>>       ASSERT_LOCKED (&mPoolMemoryLock);
>>>>>
>>>>> @@ -386,6 +388,7 @@ CoreAllocatePoolI (
>>>>>
>>>>>       HasPoolTail  = !(NeedGuard &&
>>>>>                        ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
>>>>> +  PageAsPool   = (IsUafEnabled () && !mOnGuarding);
>>>>>
>>>>>       //
>>>>>       // Adjusting the Size to be of proper alignment so that
>>>>> @@ -406,7 +409,7 @@ CoreAllocatePoolI (
>>>>>       // If allocation is over max size, just allocate pages for the request
>>>>>       // (slow)
>>>>>       //
>>>>> -  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
>>>>> +  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
>>>>>         if (!HasPoolTail) {
>>>>>           Size -= sizeof (POOL_TAIL);
>>>>>         }
>>>>> @@ -498,7 +501,7 @@ Done:
>>>>>         //
>>>>>         // If we have a pool buffer, fill in the header & tail info
>>>>>         //
>>>>> -    Head->Signature = POOL_HEAD_SIGNATURE;
>>>>> +    Head->Signature = PageAsPool ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
>>>>>         Head->Size      = Size;
>>>>>         Head->Type      = (EFI_MEMORY_TYPE) PoolType;
>>>>>         Buffer          = Head->Data;
>>>>> @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
>>>>>       CoreFreePoolPages (Memory, NoPages);
>>>>>       CoreReleaseMemoryLock ();
>>>>>
>>>>> +  GuardFreedPages (Memory, NoPages);
>>>>>       ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
>>>>>       (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
>>>>>     }
>>>>> @@ -685,15 +689,19 @@ CoreFreePoolI (
>>>>>       UINTN       Granularity;
>>>>>       BOOLEAN     IsGuarded;
>>>>>       BOOLEAN     HasPoolTail;
>>>>> +  BOOLEAN     PageAsPool;
>>>>>
>>>>>       ASSERT(Buffer != NULL);
>>>>>       //
>>>>>       // Get the head & tail of the pool entry
>>>>>       //
>>>>> -  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
>>>>> +  Head = BASE_CR (Buffer, POOL_HEAD, Data);
>>>>>       ASSERT(Head != NULL);
>>>>>
>>>>> -  if (Head->Signature != POOL_HEAD_SIGNATURE) {
>>>>> +  if (Head->Signature != POOL_HEAD_SIGNATURE &&
>>>>> +      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
>>>>> +    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
>>>>> +            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
>>>>>         return EFI_INVALID_PARAMETER;
>>>>>       }
>>>>>
>>>>> @@ -701,6 +709,7 @@ CoreFreePoolI (
>>>>>                     IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
>>>>>       HasPoolTail = !(IsGuarded &&
>>>>>                       ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
>>>>> +  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
>>>>>
>>>>>       if (HasPoolTail) {
>>>>>         Tail = HEAD_TO_TAIL (Head);
>>>>> @@ -757,7 +766,7 @@ CoreFreePoolI (
>>>>>       //
>>>>>       // If it's not on the list, it must be pool pages
>>>>>       //
>>>>> -  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
>>>>> +  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
>>>>>
>>>>>         //
>>>>>         // Return the memory pages back to free memory
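The switch from CR () to BASE_CR () plus the explicit check above makes
sense to me: if I remember DebugLib.h correctly, CR () ASSERTs as soon as
the signature differs from the single value passed in, so it could never
accept the second (page-as-pool) signature. The open-coded form keeps an
equivalent assert while tolerating both:

  Head = BASE_CR (Buffer, POOL_HEAD, Data);   // container lookup, no check
  ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
          Head->Signature == POOLPAGE_HEAD_SIGNATURE);
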
>>>>> diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
>>>>> index 05eb4f422b..f6595c90ed 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
>>>>> +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
>>>>> @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>>>>>     #include <Guid/PropertiesTable.h>
>>>>>
>>>>>     #include "DxeMain.h"
>>>>> +#include "HeapGuard.h"
>>>>>
>>>>>     #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
>>>>>       ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
>>>>> @@ -205,16 +206,13 @@ MergeMemoryMap (
>>>>>         NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
>>>>>
>>>>>         do {
>>>>> -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
>>>>> +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
>>>>> +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
>>>>>           if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
>>>>> -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
>>>>> -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
>>>>> -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
>>>>> -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
>>>>> -        if (NewMemoryMapEntry != MemoryMapEntry) {
>>>>> -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
>>>>> -        }
>>>>> -
>>>>> +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
>>>>> +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
>>>>> +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
>>>>> +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
>>>>>             NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
>>>>>             continue;
>>>>>             continue;
>>>>>           } else {
>>>>>


