public inbox for devel@edk2.groups.io
* [PATCH v2 0/5] Add freed-memory guard feature
@ 2018-10-23 14:53 Jian J Wang
  2018-10-23 14:53 ` [PATCH v2 1/5] MdeModulePkg/MdeModulePkg.dec: update PCD description for new feature Jian J Wang
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Jian J Wang @ 2018-10-23 14:53 UTC (permalink / raw)
  To: edk2-devel

> v2 changes:
> a. Drop PCD PcdUseAfterFreeDetectionPropertyMask. Use BIT4 of
>    PcdHeapGuardPropertyMask instead. Add more descriptions about
>    the new usage in dec/uni file as well.
> b. Use global of BOOLEAN other than EFI_LOCK to avoid reentrance
>    of calling InitializePageTablePool()
> c. Update implementation of CoreGetMemorySpaceMap() and 
>    CoreGetIoSpaceMap() to avoid lock failure. Drop the code to
>    detect debug print level used to achieve the same effect.
> d. Change prototype and implementation of IsHeapGuardEnabled()
>    to allow it to check freed-memory guard feature.
> e. Move the sanity check of freed-memory guard and heap guard
>    into HeapGuardCpuArchProtocolNotify()
> f. Add GuardFreedPagesChecked() to avoid duplicate feature check
> g. Split patch series into smaller patch files

Freed-memory guard is a new feature used to detect UAF (Use-After-Free)
memory issues.


Jian J Wang (5):
  MdeModulePkg/MdeModulePkg.dec: update PCD description for new feature
  UefiCpuPkg/CpuDxe: fix an infinite loop issue
  MdeModulePkg/Core: fix a lock issue in GCD memory map dump
  MdeModulePkg/Core: add freed-memory guard feature
  MdeModulePkg/Core: fix-up for changes introduced by freed-memory guard

 MdeModulePkg/Core/Dxe/Gcd/Gcd.c               | 140 +++++----
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 409 +++++++++++++++++++++++++-
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |  63 +++-
 MdeModulePkg/Core/Dxe/Mem/Page.c              |  41 ++-
 MdeModulePkg/Core/Dxe/Mem/Pool.c              |  21 +-
 MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   2 +-
 MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c  |  16 +-
 MdeModulePkg/MdeModulePkg.dec                 |  10 +
 MdeModulePkg/MdeModulePkg.uni                 |   6 +-
 UefiCpuPkg/CpuDxe/CpuDxe.h                    |   2 +-
 UefiCpuPkg/CpuDxe/CpuPageTable.c              |  19 +-
 11 files changed, 640 insertions(+), 89 deletions(-)

-- 
2.16.2.windows.1



^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v2 1/5] MdeModulePkg/MdeModulePkg.dec: update PCD description for new feature
  2018-10-23 14:53 [PATCH v2 0/5] Add freed-memory guard feature Jian J Wang
@ 2018-10-23 14:53 ` Jian J Wang
  2018-10-23 16:09   ` Laszlo Ersek
  2018-10-23 14:53 ` [PATCH v2 2/5] UefiCpuPkg/CpuDxe: fix an infinite loop issue Jian J Wang
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Jian J Wang @ 2018-10-23 14:53 UTC (permalink / raw)
  To: edk2-devel
  Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni, Laszlo Ersek

> v2 changes:
> a. Drop PCD PcdUseAfterFreeDetectionPropertyMask. Use BIT4 of
>    PcdHeapGuardPropertyMask instead. Update related descriptions in both
>    dec and uni files.

Freed-memory guard is used to detect UAF (Use-After-Free) memory issues,
i.e. illegal accesses to memory which has been freed. The principle
behind it is similar to the heap guard feature: all pool memory
allocations are turned into page allocations, and the pages are marked
as not-present once they are freed.

This also implies that, once a page is allocated and freed, it cannot
be re-allocated, which brings another issue: the risk that memory
space will be used up. To address it, this patch series adds logic to
put part (at most 64 pages at a time) of the freed pages back into the
page pool, so that the memory service still has memory to allocate
even after all memory space has been allocated once. This is called
memory promotion. The promoted pages are always taken from the oldest
freed pages.

BIT4 of the following PCD is used to enable the freed-memory guard feature.

  gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask

Please note that this feature and the heap guard cannot be enabled at
the same time.

Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/MdeModulePkg.dec | 10 ++++++++++
 MdeModulePkg/MdeModulePkg.uni |  6 +++++-
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/MdeModulePkg/MdeModulePkg.dec b/MdeModulePkg/MdeModulePkg.dec
index 6037504fa7..255b92ea67 100644
--- a/MdeModulePkg/MdeModulePkg.dec
+++ b/MdeModulePkg/MdeModulePkg.dec
@@ -955,6 +955,8 @@
   # free pages for all of them. The page allocation for the type related to
   # cleared bits keeps the same as ususal.
   #
+  # This PCD is only valid if BIT0 and/or BIT2 are set in PcdHeapGuardPropertyMask.
+  #
   # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
   #  EfiReservedMemoryType             0x0000000000000001<BR>
   #  EfiLoaderCode                     0x0000000000000002<BR>
@@ -984,6 +986,8 @@
   # if there's enough free memory for all of them. The pool allocation for the
   # type related to cleared bits keeps the same as ususal.
   #
+  # This PCD is only valid if BIT1 and/or BIT3 are set in PcdHeapGuardPropertyMask.
+  #
   # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
   #  EfiReservedMemoryType             0x0000000000000001<BR>
   #  EfiLoaderCode                     0x0000000000000002<BR>
@@ -1007,14 +1011,20 @@
   gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType|0x0|UINT64|0x30001053
 
   ## This mask is to control Heap Guard behavior.
+  #
   # Note that due to the limit of pool memory implementation and the alignment
   # requirement of UEFI spec, BIT7 is a try-best setting which cannot guarantee
   # that the returned pool is exactly adjacent to head guard page or tail guard
   # page.
+  #
+  # Note that UEFI freed-memory guard and pool/page guard cannot be enabled
+  # at the same time.
+  #
   #   BIT0 - Enable UEFI page guard.<BR>
   #   BIT1 - Enable UEFI pool guard.<BR>
   #   BIT2 - Enable SMM page guard.<BR>
   #   BIT3 - Enable SMM pool guard.<BR>
+  #   BIT4 - Enable UEFI freed-memory guard (Use-After-Free memory detection).<BR>
   #   BIT6 - Enable non-stop mode.<BR>
   #   BIT7 - The direction of Guard Page for Pool Guard.
   #          0 - The returned pool is near the tail guard page.<BR>
diff --git a/MdeModulePkg/MdeModulePkg.uni b/MdeModulePkg/MdeModulePkg.uni
index a6bcb627cf..e72b893509 100644
--- a/MdeModulePkg/MdeModulePkg.uni
+++ b/MdeModulePkg/MdeModulePkg.uni
@@ -1171,6 +1171,7 @@
                                                                                         " before and after corresponding type of pages allocated if there's enough\n"
                                                                                         " free pages for all of them. The page allocation for the type related to\n"
                                                                                         " cleared bits keeps the same as ususal.\n\n"
+                                                                                        " This PCD is only valid if BIT0 and/or BIT2 are set in PcdHeapGuardPropertyMask.\n\n"
                                                                                         " Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>\n"
                                                                                         "  EfiReservedMemoryType             0x0000000000000001\n"
                                                                                         "  EfiLoaderCode                     0x0000000000000002\n"
@@ -1198,6 +1199,7 @@
                                                                                         " before and after corresponding type of pages which the allocated pool occupies,\n"
                                                                                         " if there's enough free memory for all of them. The pool allocation for the\n"
                                                                                         " type related to cleared bits keeps the same as ususal.\n\n"
+                                                                                        " This PCD is only valid if BIT1 and/or BIT3 are set in PcdHeapGuardPropertyMask.\n\n"
                                                                                         " Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>\n"
                                                                                         "  EfiReservedMemoryType             0x0000000000000001\n"
                                                                                         "  EfiLoaderCode                     0x0000000000000002\n"
@@ -1225,11 +1227,13 @@
                                                                                             "Note that due to the limit of pool memory implementation and the alignment\n"
                                                                                             "requirement of UEFI spec, BIT7 is a try-best setting which cannot guarantee\n"
                                                                                             "that the returned pool is exactly adjacent to head guard page or tail guard\n"
-                                                                                            "page.\n"
+                                                                                            "page.\n\n"
+                                                                                            "Note that UEFI freed-memory guard and pool/page guard cannot be enabled at the same time.\n\n"
                                                                                             "   BIT0 - Enable UEFI page guard.<BR>\n"
                                                                                             "   BIT1 - Enable UEFI pool guard.<BR>\n"
                                                                                             "   BIT2 - Enable SMM page guard.<BR>\n"
                                                                                             "   BIT3 - Enable SMM pool guard.<BR>\n"
+                                                                                            "   BIT4 - Enable UEFI freed-memory guard (Use-After-Free memory detection).<BR>\n"
                                                                                             "   BIT7 - The direction of Guard Page for Pool Guard.\n"
                                                                                             "          0 - The returned pool is near the tail guard page.<BR>\n"
                                                                                             "          1 - The returned pool is near the head guard page.<BR>"
-- 
2.16.2.windows.1




* [PATCH v2 2/5] UefiCpuPkg/CpuDxe: fix an infinite loop issue
  2018-10-23 14:53 [PATCH v2 0/5] Add freed-memory guard feature Jian J Wang
  2018-10-23 14:53 ` [PATCH v2 1/5] MdeModulePkg/MdeModulePkg.dec: update PCD description for new feature Jian J Wang
@ 2018-10-23 14:53 ` Jian J Wang
  2018-10-23 16:41   ` Laszlo Ersek
  2018-10-23 14:53 ` [PATCH v2 3/5] MdeModulePkg/Core: fix a lock issue in GCD memory map dump Jian J Wang
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Jian J Wang @ 2018-10-23 14:53 UTC (permalink / raw)
  To: edk2-devel
  Cc: Laszlo Ersek, Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni

> v2 changes:
> a. Change the type of mPageTablePoolLock to be BOOLEAN. Related code
>    logic is also updated and refined.
> b. Add non-stop mode for freed-memory guard feature

The freed-memory guard feature will cause infinite recursion into
InitializePageTablePool(). This is due to the fact that
AllocateAlignedPages() is used to allocate page table pool memory.
That function will most likely call gBS->FreePages to free unaligned
pages, which then triggers another round of page attribute changes,
as shown below:

   FreePages() <===============|
=> SetMemoryAttributes()       |
=> <if out of page table>      |
=> InitializePageTablePool()   |
=> AllocateAlignedPages()      |
=> FreePages() ================|

The solution is to add a global variable serving as a lock in the page
table pool allocation function, and to fail any re-entrant request
until the current one has completed.

This patch also adds non-stop mode for the freed-memory guard.

Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 UefiCpuPkg/CpuDxe/CpuDxe.h       |  2 +-
 UefiCpuPkg/CpuDxe/CpuPageTable.c | 19 +++++++++++++++++--
 2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/UefiCpuPkg/CpuDxe/CpuDxe.h b/UefiCpuPkg/CpuDxe/CpuDxe.h
index 064ea05bba..3183a3f7f4 100644
--- a/UefiCpuPkg/CpuDxe/CpuDxe.h
+++ b/UefiCpuPkg/CpuDxe/CpuDxe.h
@@ -58,7 +58,7 @@
                                        )
 
 #define HEAP_GUARD_NONSTOP_MODE       \
-        ((PcdGet8 (PcdHeapGuardPropertyMask) & (BIT6|BIT1|BIT0)) > BIT6)
+        ((PcdGet8 (PcdHeapGuardPropertyMask) & (BIT6|BIT4|BIT1|BIT0)) > BIT6)
 
 #define NULL_DETECTION_NONSTOP_MODE   \
         ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & (BIT6|BIT0)) > BIT6)
diff --git a/UefiCpuPkg/CpuDxe/CpuPageTable.c b/UefiCpuPkg/CpuDxe/CpuPageTable.c
index 33e8ee2d2c..b7beaf935b 100644
--- a/UefiCpuPkg/CpuDxe/CpuPageTable.c
+++ b/UefiCpuPkg/CpuDxe/CpuPageTable.c
@@ -100,6 +100,7 @@ PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] = {
 };
 
 PAGE_TABLE_POOL                   *mPageTablePool = NULL;
+BOOLEAN                           mPageTablePoolLock = FALSE;
 PAGE_TABLE_LIB_PAGING_CONTEXT     mPagingContext;
 EFI_SMM_BASE2_PROTOCOL            *mSmmBase2 = NULL;
 
@@ -1046,6 +1047,16 @@ InitializePageTablePool (
   VOID                      *Buffer;
   BOOLEAN                   IsModified;
 
+  //
+  // Do not allow re-entrance.
+  //
+  if (mPageTablePoolLock) {
+    return FALSE;
+  }
+
+  mPageTablePoolLock = TRUE;
+  IsModified = FALSE;
+
   //
   // Always reserve at least PAGE_TABLE_POOL_UNIT_PAGES, including one page for
   // header.
@@ -1056,7 +1067,9 @@ InitializePageTablePool (
   Buffer = AllocateAlignedPages (PoolPages, PAGE_TABLE_POOL_ALIGNMENT);
   if (Buffer == NULL) {
     DEBUG ((DEBUG_ERROR, "ERROR: Out of aligned pages\r\n"));
-    return FALSE;
+    goto Done;
+  } else {
+    DEBUG ((DEBUG_INFO, "Paging: added %ld pages to page table pool\r\n", (UINT64)PoolPages));
   }
 
   //
@@ -1092,7 +1105,9 @@ InitializePageTablePool (
     );
   ASSERT (IsModified == TRUE);
 
-  return TRUE;
+Done:
+  mPageTablePoolLock = FALSE;
+  return IsModified;
 }
 
 /**
-- 
2.16.2.windows.1




* [PATCH v2 3/5] MdeModulePkg/Core: fix a lock issue in GCD memory map dump
  2018-10-23 14:53 [PATCH v2 0/5] Add freed-memory guard feature Jian J Wang
  2018-10-23 14:53 ` [PATCH v2 1/5] MdeModulePkg/MdeModulePkg.dec: update PCD description for new feature Jian J Wang
  2018-10-23 14:53 ` [PATCH v2 2/5] UefiCpuPkg/CpuDxe: fix an infinite loop issue Jian J Wang
@ 2018-10-23 14:53 ` Jian J Wang
  2018-10-23 18:26   ` Laszlo Ersek
  2018-10-23 14:53 ` [PATCH v2 4/5] MdeModulePkg/Core: add freed-memory guard feature Jian J Wang
  2018-10-23 14:53 ` [PATCH v2 5/5] MdeModulePkg/Core: fix-up for changes introduced by freed-memory guard Jian J Wang
  4 siblings, 1 reply; 12+ messages in thread
From: Jian J Wang @ 2018-10-23 14:53 UTC (permalink / raw)
  To: edk2-devel
  Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni, Laszlo Ersek

> v2 changes:
> a. Update implementation of CoreGetMemorySpaceMap() and 
>    CoreGetIoSpaceMap() to avoid lock failure. Drop the code to
>    detect debug print level used to achieve the same effect.

This issue is hidden in the current code but is exposed by the
introduction of the freed-memory guard feature, because the feature
turns all pool allocations into page allocations.

The solution is to move the memory allocation in CoreGetMemorySpaceMap()
and CoreGetIoSpaceMap() out of the GCD memory map lock.

   CoreDumpGcdMemorySpaceMap()
=> CoreGetMemorySpaceMap()
=> CoreAcquireGcdMemoryLock () *
   AllocatePool()
=> InternalAllocatePool()
=> CoreAllocatePool()
=> CoreAllocatePoolI()
=> CoreAllocatePoolPagesI()
=> CoreAllocatePoolPages()
=> FindFreePages()
=> PromoteMemoryResource()
=> CoreAcquireGcdMemoryLock()  **

Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/Core/Dxe/Gcd/Gcd.c | 140 ++++++++++++++++++++++++----------------
 1 file changed, 86 insertions(+), 54 deletions(-)

diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
index d9c65a8045..133c3dcd87 100644
--- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
+++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
@@ -1691,10 +1691,10 @@ CoreGetMemorySpaceMap (
   OUT EFI_GCD_MEMORY_SPACE_DESCRIPTOR  **MemorySpaceMap
   )
 {
-  EFI_STATUS                       Status;
   LIST_ENTRY                       *Link;
   EFI_GCD_MAP_ENTRY                *Entry;
   EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *Descriptor;
+  UINTN                            Number;
 
   //
   // Make sure parameters are valid
@@ -1706,38 +1706,54 @@ CoreGetMemorySpaceMap (
     return EFI_INVALID_PARAMETER;
   }
 
-  CoreAcquireGcdMemoryLock ();
-
   //
   // Count the number of descriptors
   //
-  *NumberOfDescriptors = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
+  Number = 0;
+  *NumberOfDescriptors = 0;
+  *MemorySpaceMap = NULL;
+  while (TRUE) {
+    //
+    // Allocate the MemorySpaceMap
+    //
+    if (Number != 0) {
+      if (*MemorySpaceMap != NULL) {
+        FreePool (*MemorySpaceMap);
+      }
 
-  //
-  // Allocate the MemorySpaceMap
-  //
-  *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
-  if (*MemorySpaceMap == NULL) {
-    Status = EFI_OUT_OF_RESOURCES;
-    goto Done;
-  }
+      *MemorySpaceMap = AllocatePool (Number * sizeof(EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
+      if (*MemorySpaceMap == NULL) {
+        return EFI_OUT_OF_RESOURCES;
+      }
 
-  //
-  // Fill in the MemorySpaceMap
-  //
-  Descriptor = *MemorySpaceMap;
-  Link = mGcdMemorySpaceMap.ForwardLink;
-  while (Link != &mGcdMemorySpaceMap) {
-    Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
-    BuildMemoryDescriptor (Descriptor, Entry);
-    Descriptor++;
-    Link = Link->ForwardLink;
+      *NumberOfDescriptors = Number;
+    }
+
+    CoreAcquireGcdMemoryLock ();
+
+    Number = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
+    if (Number == *NumberOfDescriptors) {
+      //
+      // Fill in the MemorySpaceMap
+      //
+      Descriptor = *MemorySpaceMap;
+      Link = mGcdMemorySpaceMap.ForwardLink;
+      while (Link != &mGcdMemorySpaceMap && Number != 0) {
+        Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
+        BuildMemoryDescriptor (Descriptor, Entry);
+        Descriptor++;
+        Number--;
+        Link = Link->ForwardLink;
+      }
+
+      CoreReleaseGcdMemoryLock ();
+      break;
+    }
+
+    CoreReleaseGcdMemoryLock ();
   }
-  Status = EFI_SUCCESS;
 
-Done:
-  CoreReleaseGcdMemoryLock ();
-  return Status;
+  return EFI_SUCCESS;
 }
 
 
@@ -1964,10 +1980,10 @@ CoreGetIoSpaceMap (
   OUT EFI_GCD_IO_SPACE_DESCRIPTOR  **IoSpaceMap
   )
 {
-  EFI_STATUS                   Status;
   LIST_ENTRY                   *Link;
   EFI_GCD_MAP_ENTRY            *Entry;
   EFI_GCD_IO_SPACE_DESCRIPTOR  *Descriptor;
+  UINTN                        Number;
 
   //
   // Make sure parameters are valid
@@ -1979,38 +1995,54 @@ CoreGetIoSpaceMap (
     return EFI_INVALID_PARAMETER;
   }
 
-  CoreAcquireGcdIoLock ();
+  Number = 0;
+  *NumberOfDescriptors = 0;
+  *IoSpaceMap = NULL;
+  while (TRUE) {
+    //
+    // Allocate the IoSpaceMap
+    //
+    if (Number != 0) {
+      if (*IoSpaceMap != NULL) {
+        FreePool (*IoSpaceMap);
+      }
 
-  //
-  // Count the number of descriptors
-  //
-  *NumberOfDescriptors = CoreCountGcdMapEntry (&mGcdIoSpaceMap);
+      *IoSpaceMap = AllocatePool (Number * sizeof(EFI_GCD_IO_SPACE_DESCRIPTOR));
+      if (*IoSpaceMap == NULL) {
+        return EFI_OUT_OF_RESOURCES;
+      }
 
-  //
-  // Allocate the IoSpaceMap
-  //
-  *IoSpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_IO_SPACE_DESCRIPTOR));
-  if (*IoSpaceMap == NULL) {
-    Status = EFI_OUT_OF_RESOURCES;
-    goto Done;
-  }
+      *NumberOfDescriptors = Number;
+    }
 
-  //
-  // Fill in the IoSpaceMap
-  //
-  Descriptor = *IoSpaceMap;
-  Link = mGcdIoSpaceMap.ForwardLink;
-  while (Link != &mGcdIoSpaceMap) {
-    Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
-    BuildIoDescriptor (Descriptor, Entry);
-    Descriptor++;
-    Link = Link->ForwardLink;
+    CoreAcquireGcdIoLock ();
+
+    //
+    // Count the number of descriptors
+    //
+    Number = CoreCountGcdMapEntry (&mGcdIoSpaceMap);
+    if (Number == *NumberOfDescriptors) {
+      //
+      // Fill in the IoSpaceMap
+      //
+      Descriptor = *IoSpaceMap;
+      Link = mGcdIoSpaceMap.ForwardLink;
+      while (Link != &mGcdIoSpaceMap && Number != 0) {
+        Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
+        BuildIoDescriptor (Descriptor, Entry);
+        Descriptor++;
+        Number--;
+        Link = Link->ForwardLink;
+      }
+
+      CoreReleaseGcdIoLock ();
+      break;
+    }
+
+    CoreReleaseGcdIoLock ();
   }
-  Status = EFI_SUCCESS;
 
-Done:
-  CoreReleaseGcdIoLock ();
-  return Status;
+  return EFI_SUCCESS;
 }
 
 
-- 
2.16.2.windows.1




* [PATCH v2 4/5] MdeModulePkg/Core: add freed-memory guard feature
  2018-10-23 14:53 [PATCH v2 0/5] Add freed-memory guard feature Jian J Wang
                   ` (2 preceding siblings ...)
  2018-10-23 14:53 ` [PATCH v2 3/5] MdeModulePkg/Core: fix a lock issue in GCD memory map dump Jian J Wang
@ 2018-10-23 14:53 ` Jian J Wang
  2018-10-23 18:29   ` Laszlo Ersek
  2018-10-23 14:53 ` [PATCH v2 5/5] MdeModulePkg/Core: fix-up for changes introduced by freed-memory guard Jian J Wang
  4 siblings, 1 reply; 12+ messages in thread
From: Jian J Wang @ 2018-10-23 14:53 UTC (permalink / raw)
  To: edk2-devel
  Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni, Laszlo Ersek

> v2 changes:
> a. Change prototype and implementation of IsHeapGuardEnabled()
>    to allow it to check freed-memory guard feature.
> b. Drop IsUafEnabled() because of a.
> c. Move the sanity check of freed-memory guard and heap guard
>    into HeapGuardCpuArchProtocolNotify()
> d. Add GuardFreedPagesChecked() to avoid duplicate feature check
> e. Coding style cleanup

Freed-memory guard is used to detect UAF (Use-After-Free) memory issues,
i.e. illegal accesses to memory which has been freed. The principle
behind it is similar to the heap guard feature: all pool memory
allocations are turned into page allocations, and the pages are marked
as not-present once they are freed.

This also implies that, once a page is allocated and freed, it cannot
be re-allocated, which brings another issue: the risk that memory
space will be used up. To address it, the memory service adds logic to
put part (at most 64 pages at a time) of the freed pages back into the
page pool, so that the memory service still has memory to allocate
even after all memory space has been allocated once. This is called
memory promotion. The promoted pages are always taken from the oldest
pages which have been freed.

Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.c | 409 +++++++++++++++++++++++++++++++++-
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.h |  63 +++++-
 MdeModulePkg/Core/Dxe/Mem/Page.c      |  41 +++-
 MdeModulePkg/Core/Dxe/Mem/Pool.c      |  21 +-
 4 files changed, 513 insertions(+), 21 deletions(-)

diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
index 663f969c0d..5666420a6d 100644
--- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
@@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
 GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
                                     = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
 
+//
+// Used for promoting freed but not used pages.
+//
+GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
+
 /**
   Set corresponding bits in bitmap table to 1 according to the address.
 
@@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
 
   @return An integer containing the guarded memory bitmap.
 **/
-UINTN
+UINT64
 GetGuardedMemoryBits (
   IN EFI_PHYSICAL_ADDRESS    Address,
   IN UINTN                   NumberOfPages
@@ -387,7 +392,7 @@ GetGuardedMemoryBits (
 {
   UINT64            *BitMap;
   UINTN             Bits;
-  UINTN             Result;
+  UINT64            Result;
   UINTN             Shift;
   UINTN             BitsToUnitEnd;
 
@@ -660,15 +665,16 @@ IsPageTypeToGuard (
 /**
   Check to see if the heap guard is enabled for page and/or pool allocation.
 
+  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
+
   @return TRUE/FALSE.
 **/
 BOOLEAN
 IsHeapGuardEnabled (
-  VOID
+  UINT8           GuardType
   )
 {
-  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages,
-                              GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_PAGE);
+  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages, GuardType);
 }
 
 /**
@@ -1203,6 +1209,380 @@ SetAllGuardPages (
   }
 }
 
+/**
+  Find the address of top-most guarded free page.
+
+  @param[out]  Address    Start address of top-most guarded free page.
+
+  @return VOID.
+**/
+VOID
+GetLastGuardedFreePageAddress (
+  OUT EFI_PHYSICAL_ADDRESS      *Address
+  )
+{
+  EFI_PHYSICAL_ADDRESS    AddressGranularity;
+  EFI_PHYSICAL_ADDRESS    BaseAddress;
+  UINTN                   Level;
+  UINT64                  Map;
+  INTN                    Index;
+
+  ASSERT (mMapLevel >= 1);
+
+  BaseAddress = 0;
+  Map = mGuardedMemoryMap;
+  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
+       ++Level) {
+    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
+
+    //
+    // Find the non-NULL entry at largest index.
+    //
+    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
+      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
+        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
+        Map = ((UINT64 *)(UINTN)Map)[Index];
+        break;
+      }
+    }
+  }
+
+  //
+  // Find the non-zero MSB then get the page address.
+  //
+  while (Map != 0) {
+    Map = RShiftU64 (Map, 1);
+    BaseAddress += EFI_PAGES_TO_SIZE (1);
+  }
+
+  *Address = BaseAddress;
+}
+
+/**
+  Record freed pages.
+
+  @param[in]  BaseAddress   Base address of just freed pages.
+  @param[in]  Pages         Number of freed pages.
+
+  @return VOID.
+**/
+VOID
+MarkFreedPages (
+  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
+  IN UINTN                    Pages
+  )
+{
+  SetGuardedMemoryBits (BaseAddress, Pages);
+}
+
+/**
+  Record freed pages as well as mark them as not-present.
+
+  @param[in]  BaseAddress   Base address of just freed pages.
+  @param[in]  Pages         Number of freed pages.
+
+  @return VOID.
+**/
+VOID
+EFIAPI
+GuardFreedPages (
+  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
+  IN  UINTN                   Pages
+  )
+{
+  EFI_STATUS      Status;
+
+  //
+  // Legacy memory lower than 1MB might be accessed with no allocation. Leave
+  // them alone.
+  //
+  if (BaseAddress < BASE_1MB) {
+    return;
+  }
+
+  MarkFreedPages (BaseAddress, Pages);
+  if (gCpu != NULL) {
+    //
+    // Set flag to make sure allocating memory without GUARD for page table
+    // operation; otherwise infinite loops could be caused.
+    //
+    mOnGuarding = TRUE;
+    //
+    // Note: This might overwrite other attributes needed by other features,
+    // such as NX memory protection.
+    //
+    Status = gCpu->SetMemoryAttributes (
+                     gCpu,
+                     BaseAddress,
+                     EFI_PAGES_TO_SIZE (Pages),
+                     EFI_MEMORY_RP
+                     );
+    //
+    // Normally we should ASSERT the returned Status. But there might be memory
+    // alloc/free involved in SetMemoryAttributes(), which might fail this
+    // calling. It's rare case so it's OK to let a few tiny holes be not-guarded.
+    //
+    if (EFI_ERROR (Status)) {
+      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%ld)\n", BaseAddress, (UINT64)Pages));
+    }
+    mOnGuarding = FALSE;
+  }
+}
+
+/**
+  Record freed pages as well as mark them as not-present, if enabled.
+
+  @param[in]  BaseAddress   Base address of just freed pages.
+  @param[in]  Pages         Number of freed pages.
+
+  @return VOID.
+**/
+VOID
+EFIAPI
+GuardFreedPagesChecked (
+  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
+  IN  UINTN                   Pages
+  )
+{
+  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
+    GuardFreedPages (BaseAddress, Pages);
+  }
+}
+
+/**
+  Mark all pages freed before CPU Arch Protocol as not-present.
+
+**/
+VOID
+GuardAllFreedPages (
+  VOID
+  )
+{
+  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    TableEntry;
+  UINT64    Address;
+  UINT64    GuardPage;
+  INTN      Level;
+  UINTN     BitIndex;
+  UINTN     GuardPageNumber;
+
+  if (mGuardedMemoryMap == 0 ||
+      mMapLevel == 0 ||
+      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
+    return;
+  }
+
+  CopyMem (Entries, mLevelMask, sizeof (Entries));
+  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
+
+  SetMem (Tables, sizeof(Tables), 0);
+  SetMem (Addresses, sizeof(Addresses), 0);
+  SetMem (Indices, sizeof(Indices), 0);
+
+  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+  Tables[Level]   = mGuardedMemoryMap;
+  Address         = 0;
+  GuardPage       = (UINT64)-1;
+  GuardPageNumber = 0;
+
+  while (TRUE) {
+    if (Indices[Level] > Entries[Level]) {
+      Tables[Level] = 0;
+      Level        -= 1;
+    } else {
+      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
+      Address     = Addresses[Level];
+
+      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
+        Level            += 1;
+        Tables[Level]     = TableEntry;
+        Addresses[Level]  = Address;
+        Indices[Level]    = 0;
+
+        continue;
+      } else {
+        BitIndex = 1;
+        while (BitIndex != 0) {
+          if ((TableEntry & BitIndex) != 0) {
+            if (GuardPage == (UINT64)-1) {
+              GuardPage = Address;
+            }
+            ++GuardPageNumber;
+          } else if (GuardPageNumber > 0) {
+            GuardFreedPages (GuardPage, GuardPageNumber);
+            GuardPageNumber = 0;
+            GuardPage       = (UINT64)-1;
+          }
+
+          if (TableEntry == 0) {
+            break;
+          }
+
+          Address += EFI_PAGES_TO_SIZE (1);
+          BitIndex = LShiftU64 (BitIndex, 1);
+        }
+      }
+    }
+
+    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
+      break;
+    }
+
+    Indices[Level] += 1;
+    Address = (Level == 0) ? 0 : Addresses[Level - 1];
+    Addresses[Level] = Address | LShiftU64 (Indices[Level], Shifts[Level]);
+
+  }
+
+  //
+  // Update the maximum address of the freed pages, which can be used for
+  // memory promotion when memory space runs out.
+  //
+  GetLastGuardedFreePageAddress (&Address);
+  if (Address != 0) {
+    mLastPromotedPage = Address;
+  }
+}
+
+/**
+  This function checks to see if the given memory map descriptor in a memory map
+  can be merged with any guarded free pages.
+
+  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
+  @param  MaxAddress        Maximum address to stop the merge.
+
+  @return VOID
+
+**/
+VOID
+MergeGuardPages (
+  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
+  IN EFI_PHYSICAL_ADDRESS       MaxAddress
+  )
+{
+  EFI_PHYSICAL_ADDRESS        EndAddress;
+  UINT64                      Bitmap;
+  INTN                        Pages;
+
+  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
+      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
+    return;
+  }
+
+  Bitmap = 0;
+  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
+  Pages -= MemoryMapEntry->NumberOfPages;
+  while (Pages > 0) {
+    if (Bitmap == 0) {
+      EndAddress = MemoryMapEntry->PhysicalStart +
+                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
+      Bitmap = GetGuardedMemoryBits (EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
+    }
+
+    if ((Bitmap & 1) == 0) {
+      break;
+    }
+
+    Pages--;
+    MemoryMapEntry->NumberOfPages++;
+    Bitmap = RShiftU64 (Bitmap, 1);
+  }
+}
+
+/**
+  Put part (at most 64 pages at a time) of the guarded free pages back into the free page pool.
+
+  Freed-memory guard is used to detect Use-After-Free (UAF) memory issues. It
+  takes a "use once, then throw away" approach to catch illegal accesses to
+  freed memory: the thrown-away memory is marked as not-present, so that any
+  access to it (after free) will be caught by a page-fault exception.
+
+  The problem is that this consumes a lot of memory space. Once no memory is
+  left in the pool to allocate, part of the freed pages must be restored to
+  their normal function. Otherwise the whole system stops functioning.
+
+  @param  StartAddress    Start address of promoted memory.
+  @param  EndAddress      End address of promoted memory.
+
+  @return TRUE    Succeeded to promote memory.
+  @return FALSE   No free memory found.
+
+**/
+BOOLEAN
+PromoteGuardedFreePages (
+  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
+  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
+  )
+{
+  EFI_STATUS              Status;
+  UINTN                   AvailablePages;
+  UINT64                  Bitmap;
+  EFI_PHYSICAL_ADDRESS    Start;
+
+  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
+    return FALSE;
+  }
+
+  //
+  // Similar to the memory allocation service, always search the freed pages
+  // in descending address order.
+  //
+  Start           = mLastPromotedPage;
+  AvailablePages  = 0;
+  while (AvailablePages == 0) {
+    Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
+    //
+    // If the address wraps around, retry from the freed pages at the top.
+    //
+    if (Start > mLastPromotedPage) {
+      GetLastGuardedFreePageAddress (&Start);
+      ASSERT (Start != 0);
+      Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
+    }
+
+    Bitmap = GetGuardedMemoryBits (Start, GUARDED_HEAP_MAP_ENTRY_BITS);
+    while (Bitmap > 0) {
+      if ((Bitmap & 1) != 0) {
+        ++AvailablePages;
+      } else if (AvailablePages == 0) {
+        Start += EFI_PAGES_TO_SIZE (1);
+      } else {
+        break;
+      }
+
+      Bitmap = RShiftU64 (Bitmap, 1);
+    }
+  }
+
+  if (AvailablePages) {
+    DEBUG ((DEBUG_INFO, "Promoted pages: %lX (%lx)\r\n", Start, (UINT64)AvailablePages));
+    ClearGuardedMemoryBits (Start, AvailablePages);
+
+    if (gCpu != NULL) {
+      //
+      // Set the flag to make sure memory for page table operations is
+      // allocated without guards; otherwise infinite recursion could occur.
+      //
+      mOnGuarding = TRUE;
+      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
+      ASSERT_EFI_ERROR (Status);
+      mOnGuarding = FALSE;
+    }
+
+    mLastPromotedPage = Start;
+    *StartAddress     = Start;
+    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
+    return TRUE;
+  }
+
+  return FALSE;
+}
+
 /**
   Notify function used to set all Guard pages before CPU Arch Protocol installed.
 **/
@@ -1212,7 +1592,20 @@ HeapGuardCpuArchProtocolNotify (
   )
 {
   ASSERT (gCpu != NULL);
-  SetAllGuardPages ();
+
+  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL) &&
+      IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
+    DEBUG ((DEBUG_ERROR, "Heap guard and freed memory guard cannot be enabled at the same time.\n"));
+    CpuDeadLoop ();
+  }
+
+  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
+    SetAllGuardPages ();
+  }
+
+  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
+    GuardAllFreedPages ();
+  }
 }
 
 /**
@@ -1264,6 +1657,10 @@ DumpGuardedMemoryBitmap (
   CHAR8     *Ruler1;
   CHAR8     *Ruler2;
 
+  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_ALL)) {
+    return;
+  }
+
   if (mGuardedMemoryMap == 0 ||
       mMapLevel == 0 ||
       mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
index 8c34692439..4abcf3a77d 100644
--- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
@@ -160,6 +160,9 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 //
 #define GUARD_HEAP_TYPE_PAGE        BIT0
 #define GUARD_HEAP_TYPE_POOL        BIT1
+#define GUARD_HEAP_TYPE_FREED       BIT4
+#define GUARD_HEAP_TYPE_ALL         \
+        (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_FREED)
 
 //
 // Debug message level
@@ -392,11 +395,13 @@ AdjustPoolHeadF (
 /**
   Check to see if the heap guard is enabled for page and/or pool allocation.
 
+  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
+
   @return TRUE/FALSE.
 **/
 BOOLEAN
 IsHeapGuardEnabled (
-  VOID
+  UINT8           GuardType
   );
 
 /**
@@ -407,6 +412,62 @@ HeapGuardCpuArchProtocolNotify (
   VOID
   );
 
+/**
+  This function checks to see if the given memory map descriptor in a memory map
+  can be merged with any guarded free pages.
+
+  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
+  @param  MaxAddress        Maximum address to stop the merge.
+
+  @return VOID
+
+**/
+VOID
+MergeGuardPages (
+  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
+  IN EFI_PHYSICAL_ADDRESS       MaxAddress
+  );
+
+/**
+  Record freed pages as well as mark them as not-present, if enabled.
+
+  @param[in]  BaseAddress   Base address of just freed pages.
+  @param[in]  Pages         Number of freed pages.
+
+  @return VOID.
+**/
+VOID
+EFIAPI
+GuardFreedPagesChecked (
+  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
+  IN  UINTN                   Pages
+  );
+
+/**
+  Put part (at most 64 pages at a time) of the guarded free pages back into the free page pool.
+
+  Freed-memory guard is used to detect Use-After-Free (UAF) memory issues. It
+  takes a "use once, then throw away" approach to catch illegal accesses to
+  freed memory: the thrown-away memory is marked as not-present, so that any
+  access to it (after free) will be caught by a page-fault exception.
+
+  The problem is that this consumes a lot of memory space. Once no memory is
+  left in the pool to allocate, part of the freed pages must be restored to
+  their normal function. Otherwise the whole system stops functioning.
+
+  @param  StartAddress    Start address of promoted memory.
+  @param  EndAddress      End address of promoted memory.
+
+  @return TRUE    Succeeded to promote memory.
+  @return FALSE   No free memory found.
+
+**/
+BOOLEAN
+PromoteGuardedFreePages (
+  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
+  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
+  );
+
 extern BOOLEAN mOnGuarding;
 
 #endif
diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
index 3b4cc08e7c..9d9abcf565 100644
--- a/MdeModulePkg/Core/Dxe/Mem/Page.c
+++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
@@ -401,9 +401,11 @@ PromoteMemoryResource (
   VOID
   )
 {
-  LIST_ENTRY         *Link;
-  EFI_GCD_MAP_ENTRY  *Entry;
-  BOOLEAN            Promoted;
+  LIST_ENTRY            *Link;
+  EFI_GCD_MAP_ENTRY     *Entry;
+  BOOLEAN               Promoted;
+  EFI_PHYSICAL_ADDRESS  StartAddress;
+  EFI_PHYSICAL_ADDRESS  EndAddress;
 
   DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
 
@@ -451,6 +453,24 @@ PromoteMemoryResource (
 
   CoreReleaseGcdMemoryLock ();
 
+  if (!Promoted) {
+    //
+    // If freed-memory guard is enabled, we could promote pages from
+    // guarded free pages.
+    //
+    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
+    if (Promoted) {
+      CoreAcquireGcdMemoryLock ();
+      CoreAddRange (
+        EfiConventionalMemory,
+        StartAddress,
+        EndAddress,
+        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
+        );
+      CoreReleaseGcdMemoryLock ();
+    }
+  }
+
   return Promoted;
 }
 /**
@@ -896,9 +916,15 @@ CoreConvertPagesEx (
     }
 
     //
-    // Add our new range in
+    // Add our new range in. Don't do this for freed pages if freed-memory
+    // guard is enabled.
     //
-    CoreAddRange (MemType, Start, RangeEnd, Attribute);
+    if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
+        !ChangingType ||
+        MemType != EfiConventionalMemory) {
+      CoreAddRange (MemType, Start, RangeEnd, Attribute);
+    }
+
     if (ChangingType && (MemType == EfiConventionalMemory)) {
       //
       // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
@@ -1514,6 +1540,7 @@ CoreFreePages (
 
   Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
   if (!EFI_ERROR (Status)) {
+    GuardFreedPagesChecked (Memory, NumberOfPages);
     CoreUpdateProfile (
       (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
       MemoryProfileActionFreePages,
@@ -1908,9 +1935,7 @@ Done:
   *MemoryMapSize = BufferSize;
 
   DEBUG_CODE (
-    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
-      DumpGuardedMemoryBitmap ();
-    }
+    DumpGuardedMemoryBitmap ();
   );
 
   return Status;
diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
index 1ff2061f7f..d47b94b95c 100644
--- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
+++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
@@ -26,7 +26,8 @@ typedef struct {
 } POOL_FREE;
 
 
-#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
+#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
+#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
 typedef struct {
   UINT32          Signature;
   UINT32          Reserved;
@@ -367,6 +368,7 @@ CoreAllocatePoolI (
   UINTN       NoPages;
   UINTN       Granularity;
   BOOLEAN     HasPoolTail;
+  BOOLEAN     PageAsPool;
 
   ASSERT_LOCKED (&mPoolMemoryLock);
 
@@ -386,6 +388,7 @@ CoreAllocatePoolI (
 
   HasPoolTail  = !(NeedGuard &&
                    ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
+  PageAsPool = (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) && !mOnGuarding);
 
   //
   // Adjusting the Size to be of proper alignment so that
@@ -406,7 +409,7 @@ CoreAllocatePoolI (
   // If allocation is over max size, just allocate pages for the request
   // (slow)
   //
-  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
+  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
     if (!HasPoolTail) {
       Size -= sizeof (POOL_TAIL);
     }
@@ -498,7 +501,7 @@ Done:
     //
     // If we have a pool buffer, fill in the header & tail info
     //
-    Head->Signature = POOL_HEAD_SIGNATURE;
+    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
     Head->Size      = Size;
     Head->Type      = (EFI_MEMORY_TYPE) PoolType;
     Buffer          = Head->Data;
@@ -615,6 +618,7 @@ CoreFreePoolPagesI (
   CoreFreePoolPages (Memory, NoPages);
   CoreReleaseMemoryLock ();
 
+  GuardFreedPagesChecked (Memory, NoPages);
   ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
     (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
 }
@@ -685,15 +689,19 @@ CoreFreePoolI (
   UINTN       Granularity;
   BOOLEAN     IsGuarded;
   BOOLEAN     HasPoolTail;
+  BOOLEAN     PageAsPool;
 
   ASSERT(Buffer != NULL);
   //
   // Get the head & tail of the pool entry
   //
-  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
+  Head = BASE_CR (Buffer, POOL_HEAD, Data);
   ASSERT(Head != NULL);
 
-  if (Head->Signature != POOL_HEAD_SIGNATURE) {
+  if (Head->Signature != POOL_HEAD_SIGNATURE &&
+      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
+    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
+            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
     return EFI_INVALID_PARAMETER;
   }
 
@@ -701,6 +709,7 @@ CoreFreePoolI (
                 IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
   HasPoolTail = !(IsGuarded &&
                   ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
+  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
 
   if (HasPoolTail) {
     Tail = HEAD_TO_TAIL (Head);
@@ -757,7 +766,7 @@ CoreFreePoolI (
   //
   // If it's not on the list, it must be pool pages
   //
-  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
+  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
 
     //
     // Return the memory pages back to free memory
-- 
2.16.2.windows.1



^ permalink raw reply related	[flat|nested] 12+ messages in thread
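The inner scan of PromoteGuardedFreePages above reads one 64-bit map entry, skips cleared bits (pages still in use), then counts the first run of set bits (freed pages). A minimal standalone sketch of that scan, using an illustrative helper name and plain C types rather than the EDK2 ones:

```c
#include <assert.h>
#include <stdint.h>

/* Skip leading cleared bits in a 64-bit "freed pages" bitmap, then count
   the first run of consecutive set bits, mirroring the inner loop of
   PromoteGuardedFreePages. Returns the run length; *StartBit receives
   the bit index where the run begins. Illustrative, not EDK2 code. */
unsigned
FirstFreeRun (uint64_t Bitmap, unsigned *StartBit)
{
  unsigned Index = 0;
  unsigned Run   = 0;

  while (Bitmap > 0) {
    if ((Bitmap & 1) != 0) {
      Run++;               /* extend the current run of freed pages */
    } else if (Run == 0) {
      Index++;             /* still skipping in-use (cleared) bits  */
    } else {
      break;               /* the run has ended: stop scanning      */
    }
    Bitmap >>= 1;
  }

  *StartBit = Index;
  return Run;
}
```

For the bitmap 0x38 (binary 111000), the run starts at bit 3 and is 3 pages long, matching how the core loop finds a contiguous block of freed pages to promote.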

* [PATCH v2 5/5] MdeModulePkg/Core: fix-up for changes introduced by freed-memory guard
  2018-10-23 14:53 [PATCH v2 0/5] Add freed-memory guard feature Jian J Wang
                   ` (3 preceding siblings ...)
  2018-10-23 14:53 ` [PATCH v2 4/5] MdeModulePkg/Core: add freed-memory guard feature Jian J Wang
@ 2018-10-23 14:53 ` Jian J Wang
  2018-10-23 17:16   ` Laszlo Ersek
  4 siblings, 1 reply; 12+ messages in thread
From: Jian J Wang @ 2018-10-23 14:53 UTC (permalink / raw)
  To: edk2-devel
  Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni, Laszlo Ersek

> v2 changes:
> a. Update IsHeapGuardEnabled() call sites because of its prototype change

Two changes are fixed up:
a. The prototype change of function IsHeapGuardEnabled()
b. More memory map descriptors are introduced by the new feature.
   MergeMemoryMap() is updated to merge freed pages into the adjacent
   memory descriptor to reduce the overall descriptor count.
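Point (b) relies on MergeMemoryMap() coalescing descriptors that have identical type and attributes and are physically adjacent. A simplified standalone model of that merge, assuming a 4 KiB page size and illustrative struct/function names rather than the real EFI_MEMORY_DESCRIPTOR handling:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 0x1000u  /* assumed 4 KiB pages */

/* Simplified stand-in for EFI_MEMORY_DESCRIPTOR. */
typedef struct {
  uint32_t Type;
  uint64_t PhysicalStart;
  uint64_t NumberOfPages;
  uint64_t Attribute;
} MEMORY_DESC;

/* Merge physically adjacent descriptors of identical Type/Attribute in
   place, in the spirit of MergeMemoryMap after MergeGuardPages has grown
   an entry. Returns the new descriptor count. Illustrative only. */
size_t
MergeAdjacent (MEMORY_DESC *Map, size_t Count)
{
  size_t Out = 0;
  size_t In;

  for (In = 0; In < Count; In++) {
    if (Out > 0 &&
        Map[Out - 1].Type == Map[In].Type &&
        Map[Out - 1].Attribute == Map[In].Attribute &&
        Map[Out - 1].PhysicalStart +
          Map[Out - 1].NumberOfPages * PAGE_SIZE == Map[In].PhysicalStart) {
      Map[Out - 1].NumberOfPages += Map[In].NumberOfPages; /* coalesce */
    } else {
      Map[Out++] = Map[In];                                /* keep     */
    }
  }
  return Out;
}
```

With three descriptors where the first two are same-type, same-attribute, and contiguous, the merge yields two descriptors, which is how the extra entries created by freed-page guarding are folded back in.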

Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |  2 +-
 MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c  | 16 +++++++---------
 2 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
index fa8f8fe91a..6298b67db1 100644
--- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
+++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
@@ -1250,7 +1250,7 @@ ApplyMemoryProtectionPolicy (
   // Don't overwrite Guard pages, which should be the first and/or last page,
   // if any.
   //
-  if (IsHeapGuardEnabled ()) {
+  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
     if (IsGuardPage (Memory))  {
       Memory += EFI_PAGE_SIZE;
       Length -= EFI_PAGE_SIZE;
diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
index 05eb4f422b..f6595c90ed 100644
--- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
+++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
@@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 #include <Guid/PropertiesTable.h>
 
 #include "DxeMain.h"
+#include "HeapGuard.h"
 
 #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
   ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
@@ -205,16 +206,13 @@ MergeMemoryMap (
     NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
 
     do {
-      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
+      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
+      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
       if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
-          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
-          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
-          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
-        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
-        if (NewMemoryMapEntry != MemoryMapEntry) {
-          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
-        }
-
+          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
+          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
+          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
+        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
         NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
         continue;
       } else {
-- 
2.16.2.windows.1



^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH v2 1/5] MdeModulePkg/MdeModulePkg.dec: update PCD description for new feature
  2018-10-23 14:53 ` [PATCH v2 1/5] MdeModulePkg/MdeModulePkg.dec: update PCD description for new feature Jian J Wang
@ 2018-10-23 16:09   ` Laszlo Ersek
  2018-10-24  0:45     ` Wang, Jian J
  0 siblings, 1 reply; 12+ messages in thread
From: Laszlo Ersek @ 2018-10-23 16:09 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel; +Cc: Michael D Kinney, Ruiyu Ni, Jiewen Yao, Star Zeng

On 10/23/18 16:53, Jian J Wang wrote:
>> v2 changes:
>> a. Drop PCD PcdUseAfterFreeDetectionPropertyMask. Use BIT4 of
>>    PcdHeapGuardPropertyMask instead. Update related descriptions in both
>>    dec and uni files.
>
> Freed-memory guard is used to detect UAF (Use-After-Free) memory issues,
> i.e. illegal accesses to memory that has been freed. The principle behind
> it is similar to the heap guard feature: all pool memory allocations are
> turned into page allocations, and the pages are marked not-present once
> they are freed.
>
> This also implies that, once a page is allocated and freed, it cannot
> be re-allocated, which brings the risk of running out of memory space.
> To address it, this patch series adds logic to put part (at most 64
> pages at a time) of the freed pages back into the page pool, so that
> the memory service still has memory to allocate after all memory space
> has been allocated once. This is called memory promotion. The promoted
> pages are always taken from the oldest freed pages.
>
> BIT4 of following PCD is used to enable the freed-memory guard feature.
>
>   gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
>
> Please note this feature cannot be enabled with heap guard at the same
> time.
>
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>  MdeModulePkg/MdeModulePkg.dec | 10 ++++++++++
>  MdeModulePkg/MdeModulePkg.uni |  6 +++++-
>  2 files changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/MdeModulePkg/MdeModulePkg.dec b/MdeModulePkg/MdeModulePkg.dec
> index 6037504fa7..255b92ea67 100644
> --- a/MdeModulePkg/MdeModulePkg.dec
> +++ b/MdeModulePkg/MdeModulePkg.dec
> @@ -955,6 +955,8 @@
>    # free pages for all of them. The page allocation for the type related to
>    # cleared bits keeps the same as ususal.
>    #
> +  # This PCD is only valid if BIT0 and/or BIT2 are set in PcdHeapGuardPropertyMask.
> +  #
>    # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
>    #  EfiReservedMemoryType             0x0000000000000001<BR>
>    #  EfiLoaderCode                     0x0000000000000002<BR>
> @@ -984,6 +986,8 @@
>    # if there's enough free memory for all of them. The pool allocation for the
>    # type related to cleared bits keeps the same as ususal.
>    #
> +  # This PCD is only valid if BIT1 and/or BIT3 are set in PcdHeapGuardPropertyMask.
> +  #
>    # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
>    #  EfiReservedMemoryType             0x0000000000000001<BR>
>    #  EfiLoaderCode                     0x0000000000000002<BR>
> @@ -1007,14 +1011,20 @@
>    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType|0x0|UINT64|0x30001053
>
>    ## This mask is to control Heap Guard behavior.
> +  #
>    # Note that due to the limit of pool memory implementation and the alignment
>    # requirement of UEFI spec, BIT7 is a try-best setting which cannot guarantee
>    # that the returned pool is exactly adjacent to head guard page or tail guard
>    # page.

(1) The above changes are not related to the use-after-free feature;
they should go into a separate cleanup patch. (Which is very welcome
otherwise.) The cleanup patch should be patch #1 in the series.

The subject should be, for example:

  MdeModulePkg: clean up Heap Guard PageType / PoolType PCD documentation

(71 characters)

> +  #
> +  # Note that UEFI freed-memory guard and pool/page guard cannot be enabled
> +  # at the same time.
> +  #
>    #   BIT0 - Enable UEFI page guard.<BR>
>    #   BIT1 - Enable UEFI pool guard.<BR>
>    #   BIT2 - Enable SMM page guard.<BR>
>    #   BIT3 - Enable SMM pool guard.<BR>
> +  #   BIT4 - Enable UEFI freed-memory guard (Use-After-Free memory detection).<BR>
>    #   BIT6 - Enable non-stop mode.<BR>
>    #   BIT7 - The direction of Guard Page for Pool Guard.
>    #          0 - The returned pool is near the tail guard page.<BR>

(2) This should go into patch #2 in the series. However, the current
subject line is useless:

  MdeModulePkg/MdeModulePkg.dec: update PCD description for new feature

Instead, I suggest:

  MdeModulePkg: introduce UEFI Use After Free feature bit in Heap Guard PCD

(73 characters).

> diff --git a/MdeModulePkg/MdeModulePkg.uni b/MdeModulePkg/MdeModulePkg.uni
> index a6bcb627cf..e72b893509 100644
> --- a/MdeModulePkg/MdeModulePkg.uni
> +++ b/MdeModulePkg/MdeModulePkg.uni
> @@ -1171,6 +1171,7 @@
>                                                                                          " before and after corresponding type of pages allocated if there's enough\n"
>                                                                                          " free pages for all of them. The page allocation for the type related to\n"
>                                                                                          " cleared bits keeps the same as ususal.\n\n"
> +                                                                                        " This PCD is only valid if BIT0 and/or BIT2 are set in PcdHeapGuardPropertyMask.\n\n"
>                                                                                          " Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>\n"
>                                                                                          "  EfiReservedMemoryType             0x0000000000000001\n"
>                                                                                          "  EfiLoaderCode                     0x0000000000000002\n"
> @@ -1198,6 +1199,7 @@
>                                                                                          " before and after corresponding type of pages which the allocated pool occupies,\n"
>                                                                                          " if there's enough free memory for all of them. The pool allocation for the\n"
>                                                                                          " type related to cleared bits keeps the same as ususal.\n\n"
> +                                                                                        " This PCD is only valid if BIT1 and/or BIT3 are set in PcdHeapGuardPropertyMask.\n\n"
>                                                                                          " Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>\n"
>                                                                                          "  EfiReservedMemoryType             0x0000000000000001\n"
>                                                                                          "  EfiLoaderCode                     0x0000000000000002\n"

(3) Same as (1).

> @@ -1225,11 +1227,13 @@
>                                                                                              "Note that due to the limit of pool memory implementation and the alignment\n"
>                                                                                              "requirement of UEFI spec, BIT7 is a try-best setting which cannot guarantee\n"
>                                                                                              "that the returned pool is exactly adjacent to head guard page or tail guard\n"
> -                                                                                            "page.\n"
> +                                                                                            "page.\n\n"
> +                                                                                            "Note that UEFI freed-memory guard and pool/page guard cannot be enabled at the same time.\n\n"
>                                                                                              "   BIT0 - Enable UEFI page guard.<BR>\n"
>                                                                                              "   BIT1 - Enable UEFI pool guard.<BR>\n"
>                                                                                              "   BIT2 - Enable SMM page guard.<BR>\n"
>                                                                                              "   BIT3 - Enable SMM pool guard.<BR>\n"
> +                                                                                            "   BIT4 - Enable UEFI freed-memory guard (Use-After-Free memory detection).<BR>\n"
>                                                                                              "   BIT7 - The direction of Guard Page for Pool Guard.\n"
>                                                                                              "          0 - The returned pool is near the tail guard page.<BR>\n"
>                                                                                              "          1 - The returned pool is near the head guard page.<BR>"
>

(4) Same as (2).

With those changes:

Reviewed-by: Laszlo Ersek <lersek@redhat.com>

Thanks
Laszlo


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v2 2/5] UefiCpuPkg/CpuDxe: fix an infinite loop issue
  2018-10-23 14:53 ` [PATCH v2 2/5] UefiCpuPkg/CpuDxe: fix an infinite loop issue Jian J Wang
@ 2018-10-23 16:41   ` Laszlo Ersek
  0 siblings, 0 replies; 12+ messages in thread
From: Laszlo Ersek @ 2018-10-23 16:41 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel; +Cc: Michael D Kinney, Jiewen Yao, Ruiyu Ni, Star Zeng

On 10/23/18 16:53, Jian J Wang wrote:
>> v2 changes:
>> a. Change the type of mPageTablePoolLock to be BOOLEAN. Related code
>>    logic is also updated and refined.
>> b. Add non-stop mode for freed-memory guard feature
>
> The freed-memory guard feature will cause infinite recursion into
> InitializePageTablePool().

(1) An important statement is missing from the commit message. Namely,
under the UAF guard feature, FreePages() will modify page attributes.

Similarly, the subject line is mostly useless. First, we're not "fixing"
an issue: the issue doesn't exist yet. We're preventing the issue before
the next patches could introduce it. Second, "infinite loop" is way too
generic. Here's a suggestion:

  UefiCpuPkg/CpuDxe: prevent inf. recursion from FreePages->SetMemoryAttributes

(77 characters. A bit longer than I'd like (which is 74), but given the
long function names, it's hard to compress down the subject better.

Also, I don't recommend "inf. loop", we have recursion here, not a plain
loop.)


> This is due to the fact that
> AllocateAlignedPages() is used to allocate page table pool memory.
> This function will most likely call gBS->FreePages to free unaligned
> pages, which then causes another round of page attribute changes, as
> shown below
>
>    FreePages() <===============|
> => SetMemoryAttributes()       |
> => <if out of page table>      |
> => InitializePageTablePool()   |
> => AllocateAlignedPages()      |
> => FreePages() ================|
>
> The solution is to add a global variable as a lock in the page table
> pool allocation function and fail any re-entrant requests while the
> allocation is still in progress.
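
The quoted solution amounts to a boolean re-entrancy guard. A sketch under assumed names (not the actual CpuDxe code), where a nested call made while the pool is being refilled fails fast instead of recursing through FreePages -> SetMemoryAttributes -> InitializePageTablePool:

```c
#include <assert.h>
#include <stdbool.h>

bool mPoolLock = false;
int  mDepth    = 0;   /* instrumentation for this sketch only */

/* Hypothetical stand-in for InitializePageTablePool(): refuses to be
   re-entered while a refill is already in progress. */
bool
RefillPool (void)
{
  if (mPoolLock) {
    return false;      /* re-entered: refuse instead of recursing */
  }
  mPoolLock = true;
  mDepth++;

  /* ... allocate aligned pages here; freeing the unaligned remainder
     may call back into this function, which now fails fast ... */
  if (mDepth == 1) {
    (void) RefillPool ();  /* simulate the nested call from FreePages */
  }

  mPoolLock = false;
  return true;
}
```

The outer call succeeds, the simulated nested call returns false, and because the lock is cleared on exit, a later top-level call succeeds again.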

(2) Please document what the practical consequences are when the global
variable prevents the infinite recursion. As far as I can see, we are
going to propagate various error codes as far as possible outwards.
Questions:

- Will this error reach the caller (for example, a 3rd party UEFI
driver) if it checks the return status of gBS->FreePages()?

- What is the consequence for the UAF guard? Is it only that the freed
pages will not be marked inaccessible, or something more serious (like
some inconsistency in internal platform firmware structures)?


> This patch also adds non-stop mode for the freed-memory guard.

(3) Please isolate the update to the non-stop macro
(HEAP_GUARD_NONSTOP_MODE) to a separate patch.

- Non-stop mode was introduced / explained in commit 8f2613628acf
("MdeModulePkg/MdeModulePkg.dec: add new settings for PCDs",
2018-08-30).

- The macro HEAP_GUARD_NONSTOP_MODE was added to CpuDxe in commit
dcc026217fdc ("UefiCpuPkg/CpuDxe: implement non-stop mode for uefi",
2018-08-30).

- Another instance of the macro HEAP_GUARD_NONSTOP_MODE was added to
PiSmmCpuDxeSmm -- with BIT1|BIT0 replaced with BIT3|BIT2 -- in commit
09afd9a42a7f ("UefiCpuPkg/PiSmmCpuDxeSmm: implement non-stop mode for
SMM", 2018-08-30)

- According to the first patch in the present series, the UAF guard is
UEFI-only. (Well, "DXE-only" would be more precise; the point is, it
does not cover SMM.)

Thus, the fact that we don't modify the macro definition in
PiSmmCpuDxeSmm, and also *why* we don't do that, should be mentioned
explicitly in the commit message of the patch that modifies
HEAP_GUARD_NONSTOP_MODE in "UefiCpuPkg/CpuDxe/CpuDxe.h":

  UefiCpuPkg/CpuDxe: consider Use After Free guard in non-stop mode

(65 characters)


> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>  UefiCpuPkg/CpuDxe/CpuDxe.h       |  2 +-
>  UefiCpuPkg/CpuDxe/CpuPageTable.c | 19 +++++++++++++++++--
>  2 files changed, 18 insertions(+), 3 deletions(-)
>
> diff --git a/UefiCpuPkg/CpuDxe/CpuDxe.h b/UefiCpuPkg/CpuDxe/CpuDxe.h
> index 064ea05bba..3183a3f7f4 100644
> --- a/UefiCpuPkg/CpuDxe/CpuDxe.h
> +++ b/UefiCpuPkg/CpuDxe/CpuDxe.h
> @@ -58,7 +58,7 @@
>                                         )
>
>  #define HEAP_GUARD_NONSTOP_MODE       \
> -        ((PcdGet8 (PcdHeapGuardPropertyMask) & (BIT6|BIT1|BIT0)) > BIT6)
> +        ((PcdGet8 (PcdHeapGuardPropertyMask) & (BIT6|BIT4|BIT1|BIT0)) > BIT6)
>
>  #define NULL_DETECTION_NONSTOP_MODE   \
>          ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & (BIT6|BIT0)) > BIT6)
> diff --git a/UefiCpuPkg/CpuDxe/CpuPageTable.c b/UefiCpuPkg/CpuDxe/CpuPageTable.c
> index 33e8ee2d2c..b7beaf935b 100644
> --- a/UefiCpuPkg/CpuDxe/CpuPageTable.c
> +++ b/UefiCpuPkg/CpuDxe/CpuPageTable.c
> @@ -100,6 +100,7 @@ PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] = {
>  };
>
>  PAGE_TABLE_POOL                   *mPageTablePool = NULL;
> +BOOLEAN                           mPageTablePoolLock = FALSE;
>  PAGE_TABLE_LIB_PAGING_CONTEXT     mPagingContext;
>  EFI_SMM_BASE2_PROTOCOL            *mSmmBase2 = NULL;
>
> @@ -1046,6 +1047,16 @@ InitializePageTablePool (
>    VOID                      *Buffer;
>    BOOLEAN                   IsModified;
>
> +  //
> +  // Do not allow re-entrance.
> +  //
> +  if (mPageTablePoolLock) {
> +    return FALSE;
> +  }
> +
> +  mPageTablePoolLock = TRUE;
> +  IsModified = FALSE;
> +
>    //
>    // Always reserve at least PAGE_TABLE_POOL_UNIT_PAGES, including one page for
>    // header.
> @@ -1056,7 +1067,9 @@ InitializePageTablePool (
>    Buffer = AllocateAlignedPages (PoolPages, PAGE_TABLE_POOL_ALIGNMENT);
>    if (Buffer == NULL) {
>      DEBUG ((DEBUG_ERROR, "ERROR: Out of aligned pages\r\n"));
> -    return FALSE;
> +    goto Done;
> +  } else {
> +    DEBUG ((DEBUG_INFO, "Paging: added %ld pages to page table pool\r\n", (UINT64)PoolPages));
>    }

(4) Please don't add an "else" branch right after a "return" or "goto".
(This is a very annoying, and regrettably frequent, anti-pattern in
edk2.)

(5) Please print UINT64 values with "%lu", not "%ld".

(6) I think this message should be logged at the DEBUG_VERBOSE level. I
don't insist, but please consider it at least.
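
Putting (4) through (6) together, the allocation site could read like
this (untested sketch, not a literal replacement hunk):

```c
  Buffer = AllocateAlignedPages (PoolPages, PAGE_TABLE_POOL_ALIGNMENT);
  if (Buffer == NULL) {
    DEBUG ((DEBUG_ERROR, "ERROR: Out of aligned pages\r\n"));
    goto Done;
  }

  DEBUG ((
    DEBUG_VERBOSE,
    "Paging: added %lu pages to page table pool\r\n",
    (UINT64)PoolPages
    ));
```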


>    //
> @@ -1092,7 +1105,9 @@ InitializePageTablePool (
>      );
>    ASSERT (IsModified == TRUE);
>
> -  return TRUE;
> +Done:
> +  mPageTablePoolLock = FALSE;
> +  return IsModified;
>  }
>
>  /**
>

Looks OK to me otherwise.

Thanks,
Laszlo


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v2 5/5] MdeModulePkg/Core: fix-up for changes introduced by freed-memory guard
  2018-10-23 14:53 ` [PATCH v2 5/5] MdeModulePkg/Core: fix-up for changes introduced by freed-memory guard Jian J Wang
@ 2018-10-23 17:16   ` Laszlo Ersek
  0 siblings, 0 replies; 12+ messages in thread
From: Laszlo Ersek @ 2018-10-23 17:16 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel; +Cc: Michael D Kinney, Ruiyu Ni, Jiewen Yao, Star Zeng

On 10/23/18 16:53, Jian J Wang wrote:
>> v2 changes:
>> a. Update IsHeapGuardEnabled() calling because its prototype change
> 
> Two changes are fixed up:
> a. Prototype change of function IsHeapGuardEnabled()

This kind of separation, between the patches, is wrong then. If someone
bisects the edk2 git history, and checks out the edk2 tree at patch #4
in this series, they will get a build failure.

> b. More memory map descriptors are introduced by new feature.
>    MergeMemoryMap() is updated to merge freed-pages into adjacent memory
>    descriptor to reduce the overall descriptor number.

In such cases, the usual way to structure the patch series is as follows:

- First the patch is added that makes the dependent / consumer code more
generic, so that it can later cope with the new feature. Right after
this "prep" patch, the new code paths in the consumer code are not
exercised in practice. Importantly however, there is neither a
compilation failure, nor a functionality error. It's just that the new
additions are not active yet; they work as NOPs.

- Second, the patch with the new feature is added; it basically "snaps
in place", and activates the code paths that were prepared earlier.

In large patch sets, it's not uncommon to see 5+ "prep" patches, each
equipping a separate aspect (or consumer site) for the new feature,
gradually. And then the final feature patch is plugged in.

Thanks
Laszlo

> 
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>  MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |  2 +-
>  MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c  | 16 +++++++---------
>  2 files changed, 8 insertions(+), 10 deletions(-)
> 
> diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> index fa8f8fe91a..6298b67db1 100644
> --- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> +++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> @@ -1250,7 +1250,7 @@ ApplyMemoryProtectionPolicy (
>    // Don't overwrite Guard pages, which should be the first and/or last page,
>    // if any.
>    //
> -  if (IsHeapGuardEnabled ()) {
> +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
>      if (IsGuardPage (Memory))  {
>        Memory += EFI_PAGE_SIZE;
>        Length -= EFI_PAGE_SIZE;
> diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> index 05eb4f422b..f6595c90ed 100644
> --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>  #include <Guid/PropertiesTable.h>
>  
>  #include "DxeMain.h"
> +#include "HeapGuard.h"
>  
>  #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
>    ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
> @@ -205,16 +206,13 @@ MergeMemoryMap (
>      NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
>  
>      do {
> -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
> +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
> +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
>        if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
> -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> -        if (NewMemoryMapEntry != MemoryMapEntry) {
> -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> -        }
> -
> +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
>          NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
>          continue;
>        } else {
> 




* Re: [PATCH v2 3/5] MdeModulePkg/Core: fix a lock issue in GCD memory map dump
  2018-10-23 14:53 ` [PATCH v2 3/5] MdeModulePkg/Core: fix a lock issue in GCD memory map dump Jian J Wang
@ 2018-10-23 18:26   ` Laszlo Ersek
  0 siblings, 0 replies; 12+ messages in thread
From: Laszlo Ersek @ 2018-10-23 18:26 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel; +Cc: Michael D Kinney, Ruiyu Ni, Jiewen Yao, Star Zeng

On 10/23/18 16:53, Jian J Wang wrote:
>> v2 changes:
>> a. Update implementation of CoreGetMemorySpaceMap() and
>>    CoreGetIoSpaceMap() to avoid lock failure. Drop the code to
>>    detect debug print level used to achieve the same effect.
>
> This issue is hidden in the current code but is exposed by the
> introduction of the freed-memory guard feature, because that feature
> turns all pool allocations into page allocations.
>
> The solution is to move the memory allocation in CoreGetMemorySpaceMap()
> and CoreGetIoSpaceMap() out of the GCD memory map lock.
>
>    CoreDumpGcdMemorySpaceMap()
> => CoreGetMemorySpaceMap()
> => CoreAcquireGcdMemoryLock () *
>    AllocatePool()
> => InternalAllocatePool()
> => CoreAllocatePool()
> => CoreAllocatePoolI()
> => CoreAllocatePoolPagesI()
> => CoreAllocatePoolPages()
> => FindFreePages()
> => PromoteMemoryResource()
> => CoreAcquireGcdMemoryLock()  **
>
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>  MdeModulePkg/Core/Dxe/Gcd/Gcd.c | 140 ++++++++++++++++++++++++----------------
>  1 file changed, 86 insertions(+), 54 deletions(-)
>
> diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> index d9c65a8045..133c3dcd87 100644
> --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> @@ -1691,10 +1691,10 @@ CoreGetMemorySpaceMap (
>    OUT EFI_GCD_MEMORY_SPACE_DESCRIPTOR  **MemorySpaceMap
>    )
>  {
> -  EFI_STATUS                       Status;
>    LIST_ENTRY                       *Link;
>    EFI_GCD_MAP_ENTRY                *Entry;
>    EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *Descriptor;
> +  UINTN                            Number;
>
>    //
>    // Make sure parameters are valid
> @@ -1706,38 +1706,54 @@ CoreGetMemorySpaceMap (
>      return EFI_INVALID_PARAMETER;
>    }
>
> -  CoreAcquireGcdMemoryLock ();
> -
>    //
>    // Count the number of descriptors
>    //
> -  *NumberOfDescriptors = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
> +  Number = 0;
> +  *NumberOfDescriptors = 0;
> +  *MemorySpaceMap = NULL;
> +  while (TRUE) {
> +    //
> +    // Allocate the MemorySpaceMap
> +    //
> +    if (Number != 0) {
> +      if (*MemorySpaceMap != NULL) {
> +        FreePool (*MemorySpaceMap);
> +      }
>
> -  //
> -  // Allocate the MemorySpaceMap
> -  //
> -  *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
> -  if (*MemorySpaceMap == NULL) {
> -    Status = EFI_OUT_OF_RESOURCES;
> -    goto Done;
> -  }
> +      *MemorySpaceMap = AllocatePool (Number * sizeof(EFI_GCD_MEMORY_SPACE_DESCRIPTOR));

(1) Side comment: you dropped the space character after "sizeof".

> +      if (*MemorySpaceMap == NULL) {
> +        return EFI_OUT_OF_RESOURCES;
> +      }
>
> -  //
> -  // Fill in the MemorySpaceMap
> -  //
> -  Descriptor = *MemorySpaceMap;
> -  Link = mGcdMemorySpaceMap.ForwardLink;
> -  while (Link != &mGcdMemorySpaceMap) {
> -    Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
> -    BuildMemoryDescriptor (Descriptor, Entry);
> -    Descriptor++;
> -    Link = Link->ForwardLink;
> +      *NumberOfDescriptors = Number;
> +    }
> +
> +    CoreAcquireGcdMemoryLock ();
> +
> +    Number = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
> +    if (Number == *NumberOfDescriptors) {
> +      //
> +      // Fill in the MemorySpaceMap
> +      //
> +      Descriptor = *MemorySpaceMap;
> +      Link = mGcdMemorySpaceMap.ForwardLink;
> +      while (Link != &mGcdMemorySpaceMap && Number != 0) {
> +        Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
> +        BuildMemoryDescriptor (Descriptor, Entry);
> +        Descriptor++;
> +        Number--;
> +        Link = Link->ForwardLink;
> +      }
> +
> +      CoreReleaseGcdMemoryLock ();
> +      break;
> +    }
> +
> +    CoreReleaseGcdMemoryLock ();
>    }
> -  Status = EFI_SUCCESS;
>
> -Done:
> -  CoreReleaseGcdMemoryLock ();
> -  return Status;
> +  return EFI_SUCCESS;
>  }

I think this loop can be improved. Here are the things I don't like
about it:

(2) The "Number" variable name is bad. It should be "DescriptorCount".

(3) The ways the outer "if"s are formulated in the loop body are hard to
    read. In particular, I think what bothers me the most is that the
    loop body starts with the memory allocation, and not with the
    CoreCountGcdMapEntry() call. I think we can do better.

(4) We have one location in the code that acquires the lock, but two
    locations that release it. While it is technically correct, it's
    hard to read as well.

(5) Decreasing "Number" in the inner "while" loop, and checking it in
    the inner loop condition, seems strange. The GCD memory space map
    will have at least one entry at all times (minimally there is a
    NonExistent entry that covers the entire address space, according to
    the address width from the CPU HOB). In addition, I don't see when
    it could validly occur that Number doesn't match the length of the
    list.

How about the following instead (I'm quoting the full function, untested
/ uncompiled):

> EFI_STATUS
> EFIAPI
> CoreGetMemorySpaceMap (
>   OUT UINTN                            *NumberOfDescriptors,
>   OUT EFI_GCD_MEMORY_SPACE_DESCRIPTOR  **MemorySpaceMap
>   )
> {
>   LIST_ENTRY                       *Link;
>   EFI_GCD_MAP_ENTRY                *Entry;
>   EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *Descriptor;
>   UINTN                            DescriptorCount;
>
>   //
>   // Make sure parameters are valid
>   //
>   if (NumberOfDescriptors == NULL) {
>     return EFI_INVALID_PARAMETER;
>   }
>   if (MemorySpaceMap == NULL) {
>     return EFI_INVALID_PARAMETER;
>   }
>
>   //
>   // No candidate map allocated yet.
>   //
>   *NumberOfDescriptors = 0;
>   *MemorySpaceMap = NULL;
>
>   //
>   // Take the lock, for entering the loop with the lock held.
>   //
>   CoreAcquireGcdMemoryLock ();
>   while (TRUE) {
>     //
>     // Count the descriptors.
>     //
>     DescriptorCount = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
>
>     if (DescriptorCount == *NumberOfDescriptors) {
>       //
>       // Fill in the MemorySpaceMap
>       //
>       Descriptor = *MemorySpaceMap;
>       Link = mGcdMemorySpaceMap.ForwardLink;
>       while (Link != &mGcdMemorySpaceMap) {
>         Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
>         BuildMemoryDescriptor (Descriptor, Entry);
>         Descriptor++;
>         Link = Link->ForwardLink;
>       }
>
>       //
>       // We're done; exit the loop with the lock held.
>       //
>       break;
>     }
>
>     CoreReleaseGcdMemoryLock ();
>
>     //
>     // Free previous allocation, with the lock released.
>     //
>     if (*MemorySpaceMap != NULL) {
>       FreePool (*MemorySpaceMap);
>     }
>
>     //
>     // Allocate the MemorySpaceMap, with the lock released.
>     //
>     *MemorySpaceMap = AllocatePool (
>                         (DescriptorCount *
>                          sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR))
>                         );
>     if (*MemorySpaceMap == NULL) {
>       *NumberOfDescriptors = 0;
>       return EFI_OUT_OF_RESOURCES;
>     }
>     *NumberOfDescriptors = DescriptorCount;
>
>     //
>     // Re-acquire the lock, for the next iteration.
>     //
>     CoreAcquireGcdMemoryLock ();
>   }
>
>   //
>   // We exited the loop with the lock held, release it.
>   //
>   CoreReleaseGcdMemoryLock ();
>
>   return EFI_SUCCESS;
> }
>

I don't insist on this variant, of course, it's just an idea to discuss!
If others find your variant easier to read, I'm fine with it as well.
(Still, I think points (1), (2) and (5) should be fixed.)


(6) I've snipped the CoreGetIoSpaceMap() changes because, AFAICS, they
mirror the CoreGetMemorySpaceMap() changes. However: do we *really* need
to update CoreGetIoSpaceMap()? Because, allocating pool memory, even if
it ends up allocating page memory, should be independent of the *IO Port
space* in GCD.

If you fix CoreGetMemorySpaceMap() but don't touch CoreGetIoSpaceMap(),
can you actually trigger a deadlock (with or without enabling the UAF
guard)?

Thanks,
Laszlo



* Re: [PATCH v2 4/5] MdeModulePkg/Core: add freed-memory guard feature
  2018-10-23 14:53 ` [PATCH v2 4/5] MdeModulePkg/Core: add freed-memory guard feature Jian J Wang
@ 2018-10-23 18:29   ` Laszlo Ersek
  0 siblings, 0 replies; 12+ messages in thread
From: Laszlo Ersek @ 2018-10-23 18:29 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel; +Cc: Michael D Kinney, Ruiyu Ni, Jiewen Yao, Star Zeng

On 10/23/18 16:53, Jian J Wang wrote:
>> v2 changes:
>> a. Change prototype and implementation of IsHeapGuardEnabled()
>>    to allow it to check freed-memory guard feature.
>> b. Drop IsUafEnabled() because of a.
>> c. Move the sanity check of freed-memory guard and heap guard
>>    into HeapGuardCpuArchProtocolNotify()
>> d. Add GuardFreedPagesChecked() to avoid duplicate feature check
>> e. Coding style cleanup
> 
> Freed-memory guard is used to detect UAF (Use-After-Free) memory
> issues, i.e. illegal accesses to memory that has already been freed.
> The principle behind it is similar to the heap guard feature: we turn
> all pool memory allocations into page allocations and mark the pages
> not-present once they are freed.
> 
> This also implies that, once a page is allocated and freed, it cannot
> be re-allocated, which brings another issue: the memory space may
> eventually run out. To address it, the memory service adds logic to
> put part (at most 64 pages at a time) of the freed pages back into
> the page pool, so that the memory service still has memory to
> allocate after all memory space has been allocated once. This is
> called memory promotion. The promoted pages are always the oldest
> pages that have been freed.
> 
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>  MdeModulePkg/Core/Dxe/Mem/HeapGuard.c | 409 +++++++++++++++++++++++++++++++++-
>  MdeModulePkg/Core/Dxe/Mem/HeapGuard.h |  63 +++++-
>  MdeModulePkg/Core/Dxe/Mem/Page.c      |  41 +++-
>  MdeModulePkg/Core/Dxe/Mem/Pool.c      |  21 +-
>  4 files changed, 513 insertions(+), 21 deletions(-)

I don't know when I will find the time to review this patch. Please make
sure that with BIT4 clear in the PCD, the changes are a no-op.

I'd prefer if you could regression-test the changes on OVMF as well, not
just on physical platforms.

Other than that, until I find the time, please proceed with the normal
review workflow -- feel free to submit further versions, according to
the MdeModulePkg maintainers' comments, and/or even push the final
version, should I prove unable to comment on this patch in time.

Thanks!
Laszlo



* Re: [PATCH v2 1/5] MdeModulePkg/MdeModulePkg.dec: update PCD description for new feature
  2018-10-23 16:09   ` Laszlo Ersek
@ 2018-10-24  0:45     ` Wang, Jian J
  0 siblings, 0 replies; 12+ messages in thread
From: Wang, Jian J @ 2018-10-24  0:45 UTC (permalink / raw)
  To: Laszlo Ersek, edk2-devel@lists.01.org
  Cc: Kinney, Michael D, Ni, Ruiyu, Yao, Jiewen, Zeng, Star

Laszlo,

Thank you very much for the comments. I went through all of them in the
other patch emails and I have no objection. So, to save everyone time, I'm
not going to respond to them separately. I'll send out the v3 patches soon.
Thanks again.

Regards,
Jian


> -----Original Message-----
> From: Laszlo Ersek [mailto:lersek@redhat.com]
> Sent: Wednesday, October 24, 2018 12:09 AM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Zeng, Star
> <star.zeng@intel.com>
> Subject: Re: [edk2] [PATCH v2 1/5] MdeModulePkg/MdeModulePkg.dec: update
> PCD description for new feature
> 
> On 10/23/18 16:53, Jian J Wang wrote:
> >> v2 changes:
> >> a. Drop PCD PcdUseAfterFreeDetectionPropertyMask. Use BIT4 of
> >>    PcdHeapGuardPropertyMask instead. Update related descriptions in both
> >>    dec and uni files.
> >
> > Freed-memory guard is used to detect UAF (Use-After-Free) memory
> > issues, i.e. illegal accesses to memory that has already been freed.
> > The principle behind it is similar to the heap guard feature: we turn
> > all pool memory allocations into page allocations and mark the pages
> > not-present once they are freed.
> >
> > This also implies that, once a page is allocated and freed, it cannot
> > be re-allocated, which brings another issue: the memory space may
> > eventually run out. To address it, this patch series adds logic to
> > put part (at most 64 pages at a time) of the freed pages back into
> > the page pool, so that the memory service still has memory to
> > allocate after all memory space has been allocated once. This is
> > called memory promotion. The promoted pages are always the oldest
> > pages freed.
> >
> > BIT4 of following PCD is used to enable the freed-memory guard feature.
> >
> >   gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
> >
> > Please note this feature cannot be enabled with heap guard at the same
> > time.
> >
> > Cc: Star Zeng <star.zeng@intel.com>
> > Cc: Michael D Kinney <michael.d.kinney@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> > ---
> >  MdeModulePkg/MdeModulePkg.dec | 10 ++++++++++
> >  MdeModulePkg/MdeModulePkg.uni |  6 +++++-
> >  2 files changed, 15 insertions(+), 1 deletion(-)
> >
> > diff --git a/MdeModulePkg/MdeModulePkg.dec
> b/MdeModulePkg/MdeModulePkg.dec
> > index 6037504fa7..255b92ea67 100644
> > --- a/MdeModulePkg/MdeModulePkg.dec
> > +++ b/MdeModulePkg/MdeModulePkg.dec
> > @@ -955,6 +955,8 @@
> >    # free pages for all of them. The page allocation for the type related to
> >    # cleared bits keeps the same as ususal.
> >    #
> > +  # This PCD is only valid if BIT0 and/or BIT2 are set in
> PcdHeapGuardPropertyMask.
> > +  #
> >    # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
> >    #  EfiReservedMemoryType             0x0000000000000001<BR>
> >    #  EfiLoaderCode                     0x0000000000000002<BR>
> > @@ -984,6 +986,8 @@
> >    # if there's enough free memory for all of them. The pool allocation for the
> >    # type related to cleared bits keeps the same as ususal.
> >    #
> > +  # This PCD is only valid if BIT1 and/or BIT3 are set in
> PcdHeapGuardPropertyMask.
> > +  #
> >    # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
> >    #  EfiReservedMemoryType             0x0000000000000001<BR>
> >    #  EfiLoaderCode                     0x0000000000000002<BR>
> > @@ -1007,14 +1011,20 @@
> >
> gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType|0x0|UINT64|0x30
> 001053
> >
> >    ## This mask is to control Heap Guard behavior.
> > +  #
> >    # Note that due to the limit of pool memory implementation and the
> alignment
> >    # requirement of UEFI spec, BIT7 is a try-best setting which cannot
> guarantee
> >    # that the returned pool is exactly adjacent to head guard page or tail guard
> >    # page.
> 
> (1) The above changes are not related to the use-after-free feature;
> they should go into a separate cleanup patch. (Which is very welcome
> otherwise.) The cleanup patch should be patch #1 in the series.
> 
> The subject should be, for example:
> 
>   MdeModulePkg: clean up Heap Guard PageType / PoolType PCD
> documentation
> 
> (71 characters)
> 
> > +  #
> > +  # Note that UEFI freed-memory guard and pool/page guard cannot be
> enabled
> > +  # at the same time.
> > +  #
> >    #   BIT0 - Enable UEFI page guard.<BR>
> >    #   BIT1 - Enable UEFI pool guard.<BR>
> >    #   BIT2 - Enable SMM page guard.<BR>
> >    #   BIT3 - Enable SMM pool guard.<BR>
> > +  #   BIT4 - Enable UEFI freed-memory guard (Use-After-Free memory
> detection).<BR>
> >    #   BIT6 - Enable non-stop mode.<BR>
> >    #   BIT7 - The direction of Guard Page for Pool Guard.
> >    #          0 - The returned pool is near the tail guard page.<BR>
> 
> (2) This should go into patch #2 in the series. However, the current
> subject line is useless:
> 
>   MdeModulePkg/MdeModulePkg.dec: update PCD description for new feature
> 
> Instead, I suggest:
> 
>   MdeModulePkg: introduce UEFI Use After Free feature bit in Heap Guard PCD
> 
> (73 characters).
> 
> > diff --git a/MdeModulePkg/MdeModulePkg.uni
> b/MdeModulePkg/MdeModulePkg.uni
> > index a6bcb627cf..e72b893509 100644
> > --- a/MdeModulePkg/MdeModulePkg.uni
> > +++ b/MdeModulePkg/MdeModulePkg.uni
> > @@ -1171,6 +1171,7 @@
> >                                                                                          " before and after
> corresponding type of pages allocated if there's enough\n"
> >                                                                                          " free pages for all of them.
> The page allocation for the type related to\n"
> >                                                                                          " cleared bits keeps the same
> as ususal.\n\n"
> > +                                                                                        " This PCD is only valid if BIT0
> and/or BIT2 are set in PcdHeapGuardPropertyMask.\n\n"
> >                                                                                          " Below is bit mask for this
> PCD: (Order is same as UEFI spec)<BR>\n"
> >                                                                                          "  EfiReservedMemoryType
> 0x0000000000000001\n"
> >                                                                                          "  EfiLoaderCode
> 0x0000000000000002\n"
> > @@ -1198,6 +1199,7 @@
> >                                                                                          " before and after
> corresponding type of pages which the allocated pool occupies,\n"
> >                                                                                          " if there's enough free
> memory for all of them. The pool allocation for the\n"
> >                                                                                          " type related to cleared bits
> keeps the same as ususal.\n\n"
> > +                                                                                        " This PCD is only valid if BIT1
> and/or BIT3 are set in PcdHeapGuardPropertyMask.\n\n"
> >                                                                                          " Below is bit mask for this
> PCD: (Order is same as UEFI spec)<BR>\n"
> >                                                                                          "  EfiReservedMemoryType
> 0x0000000000000001\n"
> >                                                                                          "  EfiLoaderCode
> 0x0000000000000002\n"
> 
> (3) Same as (1).
> 
> > @@ -1225,11 +1227,13 @@
> >                                                                                              "Note that due to the limit
> of pool memory implementation and the alignment\n"
> >                                                                                              "requirement of UEFI spec,
> BIT7 is a try-best setting which cannot guarantee\n"
> >                                                                                              "that the returned pool is
> exactly adjacent to head guard page or tail guard\n"
> > -                                                                                            "page.\n"
> > +                                                                                            "page.\n\n"
> > +                                                                                            "Note that UEFI freed-
> memory guard and pool/page guard cannot be enabled at the same time.\n\n"
> >                                                                                              "   BIT0 - Enable UEFI page
> guard.<BR>\n"
> >                                                                                              "   BIT1 - Enable UEFI pool
> guard.<BR>\n"
> >                                                                                              "   BIT2 - Enable SMM page
> guard.<BR>\n"
> >                                                                                              "   BIT3 - Enable SMM pool
> guard.<BR>\n"
> > +                                                                                            "   BIT4 - Enable UEFI
> freed-memory guard (Use-After-Free memory detection).<BR>\n"
> >                                                                                              "   BIT7 - The direction of
> Guard Page for Pool Guard.\n"
> >                                                                                              "          0 - The returned pool
> is near the tail guard page.<BR>\n"
> >                                                                                              "          1 - The returned pool
> is near the head guard page.<BR>"
> >
> 
> (4) Same as (2).
> 
> With those changes:
> 
> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
> 
> Thanks
> Laszlo
