public inbox for devel@edk2.groups.io

* [PATCH v3 0/6] Introduce freed-memory guard feature
@ 2018-10-24  5:26 Jian J Wang
  2018-10-24  5:26 ` [PATCH v3 1/6] MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation Jian J Wang
                   ` (5 more replies)
  0 siblings, 6 replies; 16+ messages in thread
From: Jian J Wang @ 2018-10-24  5:26 UTC (permalink / raw)
  To: edk2-devel

> v3 changes:
> Updated per comments from Laszlo. Please refer to the individual patch
> files for details.

Freed-memory guard is a new feature used to detect UAF (Use-After-Free)
memory issues.
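
As an illustration (hypothetical snippet, not taken from the patches), this
is the kind of bug the guard is meant to catch; with BIT4 set, the freed
pages are remapped not-present, so the access below triggers a page-fault
exception instead of silently touching stale memory:

  EFI_PHYSICAL_ADDRESS  Address;
  EFI_STATUS            Status;

  Status = gBS->AllocatePages (AllocateAnyPages, EfiBootServicesData,
                               1, &Address);
  ASSERT_EFI_ERROR (Status);
  gBS->FreePages (Address, 1);
  *(volatile UINT8 *)(UINTN)Address = 0;  // Use-after-free: trapped by guard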

Tests:
a. Basic feature unit/functionality test
b. OVMF regression test

Jian J Wang (6):
  MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation
  MdeModulePkg: introduce UEFI freed-memory guard bit in HeapGuard PCD
  UefiCpuPkg/CpuDxe: consider freed-memory guard in non-stop mode
  UefiCpuPkg/CpuDxe: prevent recursive calling of
    InitializePageTablePool
  MdeModulePkg/Core: prevent re-acquire GCD memory lock
  MdeModulePkg/Core: add freed-memory guard feature

 MdeModulePkg/Core/Dxe/Gcd/Gcd.c               |  80 +++--
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 409 +++++++++++++++++++++++++-
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |  65 +++-
 MdeModulePkg/Core/Dxe/Mem/Page.c              |  41 ++-
 MdeModulePkg/Core/Dxe/Mem/Pool.c              |  23 +-
 MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   2 +-
 MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c  |  18 +-
 MdeModulePkg/MdeModulePkg.dec                 |  10 +
 MdeModulePkg/MdeModulePkg.uni                 |   6 +-
 UefiCpuPkg/CpuDxe/CpuDxe.h                    |   2 +-
 UefiCpuPkg/CpuDxe/CpuPageTable.c              |  23 +-
 11 files changed, 615 insertions(+), 64 deletions(-)

-- 
2.16.2.windows.1


* [PATCH v3 1/6] MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation
  2018-10-24  5:26 [PATCH v3 0/6] Introduce freed-memory guard feature Jian J Wang
@ 2018-10-24  5:26 ` Jian J Wang
  2018-10-25  2:55   ` Zeng, Star
  2018-10-24  5:26 ` [PATCH v3 2/6] MdeModulePkg: introduce UEFI freed-memory guard bit in HeapGuard PCD Jian J Wang
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 16+ messages in thread
From: Jian J Wang @ 2018-10-24  5:26 UTC (permalink / raw)
  To: edk2-devel
  Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni, Laszlo Ersek

> v3 changes:
> a. split from #1 patch of v2
> b. update title

This cleanup is meant to prevent misuse of the newly introduced BIT4
(UAF detection) of PCD PcdHeapGuardPropertyMask, because that bit applies
to all types of physical memory. In other words, PcdHeapGuardPoolType and
PcdHeapGuardPageType have no effect on BIT4 of PcdHeapGuardPropertyMask.
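
For example (illustrative values, assuming the standard bit layout listed in
the PCD description), with PcdHeapGuardPropertyMask set to 0x02 (BIT1, UEFI
pool guard) and PcdHeapGuardPoolType set to 0x10 (EfiBootServicesData), only
boot-services-data pool allocations receive guard pages; BIT4, by contrast,
guards freed pages of every memory type, regardless of these two PCDs.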

Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/MdeModulePkg.dec | 4 ++++
 MdeModulePkg/MdeModulePkg.uni | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/MdeModulePkg/MdeModulePkg.dec b/MdeModulePkg/MdeModulePkg.dec
index 6037504fa7..2009dbc5fd 100644
--- a/MdeModulePkg/MdeModulePkg.dec
+++ b/MdeModulePkg/MdeModulePkg.dec
@@ -955,6 +955,8 @@
   # free pages for all of them. The page allocation for the type related to
   # cleared bits keeps the same as ususal.
   #
+  # This PCD is only valid if BIT0 and/or BIT2 are set in PcdHeapGuardPropertyMask.
+  #
   # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
   #  EfiReservedMemoryType             0x0000000000000001<BR>
   #  EfiLoaderCode                     0x0000000000000002<BR>
@@ -984,6 +986,8 @@
   # if there's enough free memory for all of them. The pool allocation for the
   # type related to cleared bits keeps the same as ususal.
   #
+  # This PCD is only valid if BIT1 and/or BIT3 are set in PcdHeapGuardPropertyMask.
+  #
   # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
   #  EfiReservedMemoryType             0x0000000000000001<BR>
   #  EfiLoaderCode                     0x0000000000000002<BR>
diff --git a/MdeModulePkg/MdeModulePkg.uni b/MdeModulePkg/MdeModulePkg.uni
index a6bcb627cf..9d2e473fa9 100644
--- a/MdeModulePkg/MdeModulePkg.uni
+++ b/MdeModulePkg/MdeModulePkg.uni
@@ -1171,6 +1171,7 @@
                                                                                         " before and after corresponding type of pages allocated if there's enough\n"
                                                                                         " free pages for all of them. The page allocation for the type related to\n"
                                                                                         " cleared bits keeps the same as ususal.\n\n"
+                                                                                        " This PCD is only valid if BIT0 and/or BIT2 are set in PcdHeapGuardPropertyMask.\n\n"
                                                                                         " Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>\n"
                                                                                         "  EfiReservedMemoryType             0x0000000000000001\n"
                                                                                         "  EfiLoaderCode                     0x0000000000000002\n"
@@ -1198,6 +1199,7 @@
                                                                                         " before and after corresponding type of pages which the allocated pool occupies,\n"
                                                                                         " if there's enough free memory for all of them. The pool allocation for the\n"
                                                                                         " type related to cleared bits keeps the same as ususal.\n\n"
+                                                                                        " This PCD is only valid if BIT1 and/or BIT3 are set in PcdHeapGuardPropertyMask.\n\n"
                                                                                         " Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>\n"
                                                                                         "  EfiReservedMemoryType             0x0000000000000001\n"
                                                                                         "  EfiLoaderCode                     0x0000000000000002\n"
-- 
2.16.2.windows.1


* [PATCH v3 2/6] MdeModulePkg: introduce UEFI freed-memory guard bit in HeapGuard PCD
  2018-10-24  5:26 [PATCH v3 0/6] Introduce freed-memory guard feature Jian J Wang
  2018-10-24  5:26 ` [PATCH v3 1/6] MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation Jian J Wang
@ 2018-10-24  5:26 ` Jian J Wang
  2018-10-25  3:02   ` Zeng, Star
  2018-10-24  5:26 ` [PATCH v3 3/6] UefiCpuPkg/CpuDxe: consider freed-memory guard in non-stop mode Jian J Wang
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 16+ messages in thread
From: Jian J Wang @ 2018-10-24  5:26 UTC (permalink / raw)
  To: edk2-devel
  Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni, Laszlo Ersek

> v3 changes:
> a. split from v2 #1 patch file.
> b. refine the commit message and title.

A UAF (Use-After-Free) memory issue is a kind of illegal access to memory
which has been freed. It can be detected by a new freed-memory guard
enforced on freed memory.

BIT4 of the following PCD is used to enable the freed-memory guard feature.

  gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask

Please note this feature is for debug purposes only and should not be
enabled in a production BIOS; it also cannot be enabled together with the
pool/page heap guard. It is disabled by default.
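
For reference, a minimal sketch (not part of this patch) of how code can
query the new bit, mirroring the existing PcdGet8 () usage in CpuDxe.h:

  BOOLEAN  FreedMemoryGuardEnabled;

  //
  // TRUE only if the platform build sets BIT4 in the mask.
  //
  FreedMemoryGuardEnabled =
    ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT4) != 0);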

Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/MdeModulePkg.dec | 6 ++++++
 MdeModulePkg/MdeModulePkg.uni | 4 +++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/MdeModulePkg/MdeModulePkg.dec b/MdeModulePkg/MdeModulePkg.dec
index 2009dbc5fd..255b92ea67 100644
--- a/MdeModulePkg/MdeModulePkg.dec
+++ b/MdeModulePkg/MdeModulePkg.dec
@@ -1011,14 +1011,20 @@
   gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType|0x0|UINT64|0x30001053
 
   ## This mask is to control Heap Guard behavior.
+  #
   # Note that due to the limit of pool memory implementation and the alignment
   # requirement of UEFI spec, BIT7 is a try-best setting which cannot guarantee
   # that the returned pool is exactly adjacent to head guard page or tail guard
   # page.
+  #
+  # Note that UEFI freed-memory guard and pool/page guard cannot be enabled
+  # at the same time.
+  #
   #   BIT0 - Enable UEFI page guard.<BR>
   #   BIT1 - Enable UEFI pool guard.<BR>
   #   BIT2 - Enable SMM page guard.<BR>
   #   BIT3 - Enable SMM pool guard.<BR>
+  #   BIT4 - Enable UEFI freed-memory guard (Use-After-Free memory detection).<BR>
   #   BIT6 - Enable non-stop mode.<BR>
   #   BIT7 - The direction of Guard Page for Pool Guard.
   #          0 - The returned pool is near the tail guard page.<BR>
diff --git a/MdeModulePkg/MdeModulePkg.uni b/MdeModulePkg/MdeModulePkg.uni
index 9d2e473fa9..e72b893509 100644
--- a/MdeModulePkg/MdeModulePkg.uni
+++ b/MdeModulePkg/MdeModulePkg.uni
@@ -1227,11 +1227,13 @@
                                                                                             "Note that due to the limit of pool memory implementation and the alignment\n"
                                                                                             "requirement of UEFI spec, BIT7 is a try-best setting which cannot guarantee\n"
                                                                                             "that the returned pool is exactly adjacent to head guard page or tail guard\n"
-                                                                                            "page.\n"
+                                                                                            "page.\n\n"
+                                                                                            "Note that UEFI freed-memory guard and pool/page guard cannot be enabled at the same time.\n\n"
                                                                                             "   BIT0 - Enable UEFI page guard.<BR>\n"
                                                                                             "   BIT1 - Enable UEFI pool guard.<BR>\n"
                                                                                             "   BIT2 - Enable SMM page guard.<BR>\n"
                                                                                             "   BIT3 - Enable SMM pool guard.<BR>\n"
+                                                                                            "   BIT4 - Enable UEFI freed-memory guard (Use-After-Free memory detection).<BR>\n"
                                                                                             "   BIT7 - The direction of Guard Page for Pool Guard.\n"
                                                                                             "          0 - The returned pool is near the tail guard page.<BR>\n"
                                                                                             "          1 - The returned pool is near the head guard page.<BR>"
-- 
2.16.2.windows.1


* [PATCH v3 3/6] UefiCpuPkg/CpuDxe: consider freed-memory guard in non-stop mode
  2018-10-24  5:26 [PATCH v3 0/6] Introduce freed-memory guard feature Jian J Wang
  2018-10-24  5:26 ` [PATCH v3 1/6] MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation Jian J Wang
  2018-10-24  5:26 ` [PATCH v3 2/6] MdeModulePkg: introduce UEFI freed-memory guard bit in HeapGuard PCD Jian J Wang
@ 2018-10-24  5:26 ` Jian J Wang
  2018-10-24  5:26 ` [PATCH v3 4/6] UefiCpuPkg/CpuDxe: prevent recursive calling of InitializePageTablePool Jian J Wang
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 16+ messages in thread
From: Jian J Wang @ 2018-10-24  5:26 UTC (permalink / raw)
  To: edk2-devel
  Cc: Laszlo Ersek, Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni

> v3 changes:
> a. split from #2 patch of v2
> b. refine the commit message
> c. refine the title

Non-stop mode was introduced / explained in commit 8f2613628acf
("MdeModulePkg/MdeModulePkg.dec: add new settings for PCDs",
2018-08-30).

The macro HEAP_GUARD_NONSTOP_MODE was added to CpuDxe in commit
dcc026217fdc ("UefiCpuPkg/CpuDxe: implement non-stop mode for uefi",
2018-08-30).

Another instance of the macro HEAP_GUARD_NONSTOP_MODE was added to
PiSmmCpuDxeSmm -- with BIT1|BIT0 replaced with BIT3|BIT2 -- in commit
09afd9a42a7f ("UefiCpuPkg/PiSmmCpuDxeSmm: implement non-stop mode for
SMM", 2018-08-30).

Since the freed-memory guard is UEFI-only, this patch only updates
HEAP_GUARD_NONSTOP_MODE in "UefiCpuPkg/CpuDxe/CpuDxe.h" (adding BIT4).
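
To illustrate why the "> BIT6" comparison works (mask values are examples
only; BIT6 = 0x40, BIT4 = 0x10):

  Mask = 0x50 (BIT6|BIT4):  0x50 & (BIT6|BIT4|BIT1|BIT0) = 0x50 > 0x40 -> TRUE
  Mask = 0x40 (BIT6 only):  0x40 & (BIT6|BIT4|BIT1|BIT0) = 0x40 > 0x40 -> FALSE
  Mask = 0x10 (BIT4 only):  0x10 & (BIT6|BIT4|BIT1|BIT0) = 0x10 > 0x40 -> FALSE

i.e. the macro evaluates to TRUE only when BIT6 (non-stop mode) and at least
one of the guard-enable bits are set together.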

Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 UefiCpuPkg/CpuDxe/CpuDxe.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/UefiCpuPkg/CpuDxe/CpuDxe.h b/UefiCpuPkg/CpuDxe/CpuDxe.h
index 064ea05bba..3183a3f7f4 100644
--- a/UefiCpuPkg/CpuDxe/CpuDxe.h
+++ b/UefiCpuPkg/CpuDxe/CpuDxe.h
@@ -58,7 +58,7 @@
                                        )
 
 #define HEAP_GUARD_NONSTOP_MODE       \
-        ((PcdGet8 (PcdHeapGuardPropertyMask) & (BIT6|BIT1|BIT0)) > BIT6)
+        ((PcdGet8 (PcdHeapGuardPropertyMask) & (BIT6|BIT4|BIT1|BIT0)) > BIT6)
 
 #define NULL_DETECTION_NONSTOP_MODE   \
         ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & (BIT6|BIT0)) > BIT6)
-- 
2.16.2.windows.1


* [PATCH v3 4/6] UefiCpuPkg/CpuDxe: prevent recursive calling of InitializePageTablePool
  2018-10-24  5:26 [PATCH v3 0/6] Introduce freed-memory guard feature Jian J Wang
                   ` (2 preceding siblings ...)
  2018-10-24  5:26 ` [PATCH v3 3/6] UefiCpuPkg/CpuDxe: consider freed-memory guard in non-stop mode Jian J Wang
@ 2018-10-24  5:26 ` Jian J Wang
  2018-10-24  5:26 ` [PATCH v3 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock Jian J Wang
  2018-10-24  5:26 ` [PATCH v3 6/6] MdeModulePkg/Core: add freed-memory guard feature Jian J Wang
  5 siblings, 0 replies; 16+ messages in thread
From: Jian J Wang @ 2018-10-24  5:26 UTC (permalink / raw)
  To: edk2-devel
  Cc: Laszlo Ersek, Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni

> v3 changes:
> a. split from #2 patch
> b. refine the commit message and title
> c. remove "else" branch

The freed-memory guard feature will cause recursive calling of
InitializePageTablePool(). This is due to the fact that
AllocateAlignedPages() is used to allocate page table pool memory. This
function will most likely call gBS->FreePages to free unaligned pages,
which then causes another round of page attribute changes, like below
(freed pages are always marked not-present if the freed-memory guard is
enabled):

   FreePages() <===============|
=> CpuSetMemoryAttributes()    |
=> <if out of page table>      |
=> InitializePageTablePool()   |
=> AllocateAlignedPages()      |
=> FreePages() ================|

The solution is to add a global variable acting as a lock in the page
table pool allocation function, and to fail any nested request while the
allocation is still in progress.

Since this issue will only happen if the freed-memory guard is enabled,
it won't affect CpuSetMemoryAttributes() in a default BIOS build.

If the freed-memory guard is enabled, it only affects the pages freed in
AllocateAlignedPages() (which will fail to be marked as not-present).

Since those freed pages haven't been used yet (their addresses are not yet
exposed to code outside AllocateAlignedPages()), this won't compromise the
freed-memory guard feature.

This change will just fail the CpuSetMemoryAttributes() called from
FreePages(), but it won't fail FreePages() itself, so the error status
won't be propagated to upper layers of code.
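
A minimal sketch of the lock pattern applied below (hypothetical names; the
real code lives in InitializePageTablePool()):

  BOOLEAN  mInitInProgress = FALSE;

  BOOLEAN
  NonReentrantInit (
    VOID
    )
  {
    BOOLEAN  Result;

    if (mInitInProgress) {
      return FALSE;             // Nested call: fail fast instead of recursing.
    }

    mInitInProgress = TRUE;
    Result = DoRealInit ();     // May indirectly trigger NonReentrantInit().
    mInitInProgress = FALSE;

    return Result;
  }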

Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 UefiCpuPkg/CpuDxe/CpuPageTable.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/UefiCpuPkg/CpuDxe/CpuPageTable.c b/UefiCpuPkg/CpuDxe/CpuPageTable.c
index 33e8ee2d2c..4bee8c7772 100644
--- a/UefiCpuPkg/CpuDxe/CpuPageTable.c
+++ b/UefiCpuPkg/CpuDxe/CpuPageTable.c
@@ -100,6 +100,7 @@ PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] = {
 };
 
 PAGE_TABLE_POOL                   *mPageTablePool = NULL;
+BOOLEAN                           mPageTablePoolLock = FALSE;
 PAGE_TABLE_LIB_PAGING_CONTEXT     mPagingContext;
 EFI_SMM_BASE2_PROTOCOL            *mSmmBase2 = NULL;
 
@@ -1046,6 +1047,16 @@ InitializePageTablePool (
   VOID                      *Buffer;
   BOOLEAN                   IsModified;
 
+  //
+  // Do not allow re-entrance.
+  //
+  if (mPageTablePoolLock) {
+    return FALSE;
+  }
+
+  mPageTablePoolLock = TRUE;
+  IsModified = FALSE;
+
   //
   // Always reserve at least PAGE_TABLE_POOL_UNIT_PAGES, including one page for
   // header.
@@ -1056,9 +1067,15 @@ InitializePageTablePool (
   Buffer = AllocateAlignedPages (PoolPages, PAGE_TABLE_POOL_ALIGNMENT);
   if (Buffer == NULL) {
     DEBUG ((DEBUG_ERROR, "ERROR: Out of aligned pages\r\n"));
-    return FALSE;
+    goto Done;
   }
 
+  DEBUG ((
+    DEBUG_INFO,
+    "Paging: added %lu pages to page table pool\r\n",
+    (UINT64)PoolPages
+    ));
+
   //
   // Link all pools into a list for easier track later.
   //
@@ -1092,7 +1109,9 @@ InitializePageTablePool (
     );
   ASSERT (IsModified == TRUE);
 
-  return TRUE;
+Done:
+  mPageTablePoolLock = FALSE;
+  return IsModified;
 }
 
 /**
-- 
2.16.2.windows.1


* [PATCH v3 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock
  2018-10-24  5:26 [PATCH v3 0/6] Introduce freed-memory guard feature Jian J Wang
                   ` (3 preceding siblings ...)
  2018-10-24  5:26 ` [PATCH v3 4/6] UefiCpuPkg/CpuDxe: prevent recursive calling of InitializePageTablePool Jian J Wang
@ 2018-10-24  5:26 ` Jian J Wang
  2018-10-25  3:22   ` Zeng, Star
  2018-10-24  5:26 ` [PATCH v3 6/6] MdeModulePkg/Core: add freed-memory guard feature Jian J Wang
  5 siblings, 1 reply; 16+ messages in thread
From: Jian J Wang @ 2018-10-24  5:26 UTC (permalink / raw)
  To: edk2-devel
  Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni, Laszlo Ersek

> v3 changes:
> a. drop the changes to CoreGetIoSpaceMap() because it won't cause
>    problem
> b. refine the logic in changes of CoreGetMemorySpaceMap()
>    and add more comments

This issue is hidden in the current code but is exposed by the introduction
of the freed-memory guard feature, because the feature turns all pool
allocations into page allocations.

The solution is to move the memory allocation in CoreGetMemorySpaceMap()
out of the GCD memory map lock.

A while-loop is added to make sure that the descriptor count of the memory
map is still the same after the memory allocation, because the allocation
is executed outside the GCD memory lock (although a change there is rare).

   CoreDumpGcdMemorySpaceMap()
=> CoreGetMemorySpaceMap()
=> CoreAcquireGcdMemoryLock () *
   AllocatePool()
=> InternalAllocatePool()
=> CoreAllocatePool()
=> CoreAllocatePoolI()
=> CoreAllocatePoolPagesI()
=> CoreAllocatePoolPages()
=> FindFreePages()
=> PromoteMemoryResource()
=> CoreAcquireGcdMemoryLock()  **
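
A minimal sketch of the resulting count/allocate/re-check pattern
(hypothetical helper names, error handling omitted):

  Buffer    = NULL;
  LastCount = 0;

  AcquireLock ();
  for (;;) {
    Count = CountEntries ();      // Must run under the lock.
    if (Count == LastCount) {
      FillDescriptors (Buffer);   // Map unchanged since the allocation.
      break;
    }

    ReleaseLock ();               // AllocatePool() may re-enter the GCD lock.
    if (Buffer != NULL) {
      FreePool (Buffer);
    }
    Buffer    = AllocatePool (Count * DescriptorSize);
    LastCount = Count;            // Re-checked under the lock next round.
    AcquireLock ();
  }
  ReleaseLock ();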

Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/Core/Dxe/Gcd/Gcd.c | 80 +++++++++++++++++++++++++++--------------
 1 file changed, 54 insertions(+), 26 deletions(-)

diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
index d9c65a8045..bc02945721 100644
--- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
+++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
@@ -1691,10 +1691,10 @@ CoreGetMemorySpaceMap (
   OUT EFI_GCD_MEMORY_SPACE_DESCRIPTOR  **MemorySpaceMap
   )
 {
-  EFI_STATUS                       Status;
   LIST_ENTRY                       *Link;
   EFI_GCD_MAP_ENTRY                *Entry;
   EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *Descriptor;
+  UINTN                            DescriptorCount;
 
   //
   // Make sure parameters are valid
@@ -1706,38 +1706,66 @@ CoreGetMemorySpaceMap (
     return EFI_INVALID_PARAMETER;
   }
 
+  *NumberOfDescriptors  = 0;
+  *MemorySpaceMap       = NULL;
+
   CoreAcquireGcdMemoryLock ();
 
-  //
-  // Count the number of descriptors
-  //
-  *NumberOfDescriptors = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
+  while (TRUE) {
+    //
+    // Count the number of descriptors. It might be done more than once because
+    // there's code which has to be running outside the GCD lock.
+    //
+    DescriptorCount = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
+    if (DescriptorCount == *NumberOfDescriptors) {
+      //
+      // Fill in the MemorySpaceMap if no memory space map change.
+      //
+      Descriptor = *MemorySpaceMap;
+      Link = mGcdMemorySpaceMap.ForwardLink;
+      while (Link != &mGcdMemorySpaceMap) {
+        Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
+        BuildMemoryDescriptor (Descriptor, Entry);
+        Descriptor++;
+        Link = Link->ForwardLink;
+      }
 
-  //
-  // Allocate the MemorySpaceMap
-  //
-  *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
-  if (*MemorySpaceMap == NULL) {
-    Status = EFI_OUT_OF_RESOURCES;
-    goto Done;
-  }
+      break;
+    }
 
-  //
-  // Fill in the MemorySpaceMap
-  //
-  Descriptor = *MemorySpaceMap;
-  Link = mGcdMemorySpaceMap.ForwardLink;
-  while (Link != &mGcdMemorySpaceMap) {
-    Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
-    BuildMemoryDescriptor (Descriptor, Entry);
-    Descriptor++;
-    Link = Link->ForwardLink;
+    //
+    // Release the lock before the memory allocation, because it might cause
+    // a GCD lock conflict in one of the calling paths of AllocatePool().
+    //
+    CoreReleaseGcdMemoryLock ();
+
+    //
+    // Allocate memory to store the MemorySpaceMap. Note it might be already
+    // allocated if there's map descriptor change during memory allocation at
+    // last time.
+    //
+    if (*MemorySpaceMap != NULL) {
+      FreePool (*MemorySpaceMap);
+    }
+
+    *MemorySpaceMap = AllocatePool (DescriptorCount *
+                                    sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
+    if (*MemorySpaceMap == NULL) {
+      *NumberOfDescriptors = 0;
+      return EFI_OUT_OF_RESOURCES;
+    }
+
+    //
+    // Save the descriptor number got before for another round of check to make
+    // sure we won't miss any, since we have code running outside the GCD lock.
+    //
+    *NumberOfDescriptors = DescriptorCount;
+    CoreAcquireGcdMemoryLock ();
   }
-  Status = EFI_SUCCESS;
 
-Done:
   CoreReleaseGcdMemoryLock ();
-  return Status;
+
+  return EFI_SUCCESS;
 }
 
 
-- 
2.16.2.windows.1


* [PATCH v3 6/6] MdeModulePkg/Core: add freed-memory guard feature
  2018-10-24  5:26 [PATCH v3 0/6] Introduce freed-memory guard feature Jian J Wang
                   ` (4 preceding siblings ...)
  2018-10-24  5:26 ` [PATCH v3 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock Jian J Wang
@ 2018-10-24  5:26 ` Jian J Wang
  2018-10-25  3:37   ` Zeng, Star
  5 siblings, 1 reply; 16+ messages in thread
From: Jian J Wang @ 2018-10-24  5:26 UTC (permalink / raw)
  To: edk2-devel
  Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni, Laszlo Ersek

> v3 changes:
> a. Merge from #4 and #5 of v2 patch
> b. Coding style cleanup

Freed-memory guard is used to detect UAF (Use-After-Free) memory issues,
i.e. illegal accesses to memory which has been freed. The principle behind
it is similar to the heap guard feature: all pool memory allocations are
turned into page allocations, and pages are marked as not-present once
they are freed. The freed memory block will not be added back into the
memory pool.

This means that, once a page is allocated and freed, it cannot be
re-allocated, which brings the risk of running out of memory space. To
address it, the memory service adds logic to put part of the freed pages
(at most 64 pages at a time) back into the page pool, so that the memory
service still has memory to allocate once all memory space has been
allocated at least once. This is called memory promotion. The promoted
pages are always taken from the oldest pages that have been freed.
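
(The 64-page-at-a-time limit reflects the guard bitmap layout: freed pages
are tracked one bit per page in 64-bit map entries, and promotion scans one
such entry -- GUARDED_HEAP_MAP_ENTRY_BITS -- at a time; see
PromoteGuardedFreePages() in the diff below.)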

This feature brings another problem: the number of memory map descriptors
increases enormously (200+ -> 2000+). One of the changes in this patch is
to update MergeMemoryMap() in PropertiesTable.c to allow merging freed
pages back into the memory map. With that, the number can stay at around
510.

Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 409 +++++++++++++++++++++++++-
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |  65 +++-
 MdeModulePkg/Core/Dxe/Mem/Page.c              |  41 ++-
 MdeModulePkg/Core/Dxe/Mem/Pool.c              |  23 +-
 MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   2 +-
 MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c  |  18 +-
 6 files changed, 524 insertions(+), 34 deletions(-)

diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
index 663f969c0d..449a022658 100644
--- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
@@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
 GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
                                     = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
 
+//
+// Used for promoting freed but not used pages.
+//
+GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
+
 /**
   Set corresponding bits in bitmap table to 1 according to the address.
 
@@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
 
   @return An integer containing the guarded memory bitmap.
 **/
-UINTN
+UINT64
 GetGuardedMemoryBits (
   IN EFI_PHYSICAL_ADDRESS    Address,
   IN UINTN                   NumberOfPages
@@ -387,7 +392,7 @@ GetGuardedMemoryBits (
 {
   UINT64            *BitMap;
   UINTN             Bits;
-  UINTN             Result;
+  UINT64            Result;
   UINTN             Shift;
   UINTN             BitsToUnitEnd;
 
@@ -660,15 +665,16 @@ IsPageTypeToGuard (
 /**
   Check to see if the heap guard is enabled for page and/or pool allocation.
 
+  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
+
   @return TRUE/FALSE.
 **/
 BOOLEAN
 IsHeapGuardEnabled (
-  VOID
+  UINT8           GuardType
   )
 {
-  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages,
-                              GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_PAGE);
+  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages, GuardType);
 }
 
 /**
@@ -1203,6 +1209,380 @@ SetAllGuardPages (
   }
 }
 
+/**
+  Find the address of top-most guarded free page.
+
+  @param[out]  Address    Start address of top-most guarded free page.
+
+  @return VOID.
+**/
+VOID
+GetLastGuardedFreePageAddress (
+  OUT EFI_PHYSICAL_ADDRESS      *Address
+  )
+{
+  EFI_PHYSICAL_ADDRESS    AddressGranularity;
+  EFI_PHYSICAL_ADDRESS    BaseAddress;
+  UINTN                   Level;
+  UINT64                  Map;
+  INTN                    Index;
+
+  ASSERT (mMapLevel >= 1);
+
+  BaseAddress = 0;
+  Map = mGuardedMemoryMap;
+  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
+       ++Level) {
+    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
+
+    //
+    // Find the non-NULL entry at largest index.
+    //
+    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
+      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
+        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
+        Map = ((UINT64 *)(UINTN)Map)[Index];
+        break;
+      }
+    }
+  }
+
+  //
+  // Find the non-zero MSB then get the page address.
+  //
+  while (Map != 0) {
+    Map = RShiftU64 (Map, 1);
+    BaseAddress += EFI_PAGES_TO_SIZE (1);
+  }
+
+  *Address = BaseAddress;
+}
+
+/**
+  Record freed pages.
+
+  @param[in]  BaseAddress   Base address of just freed pages.
+  @param[in]  Pages         Number of freed pages.
+
+  @return VOID.
+**/
+VOID
+MarkFreedPages (
+  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
+  IN UINTN                    Pages
+  )
+{
+  SetGuardedMemoryBits (BaseAddress, Pages);
+}
+
+/**
+  Record freed pages as well as mark them as not-present.
+
+  @param[in]  BaseAddress   Base address of just freed pages.
+  @param[in]  Pages         Number of freed pages.
+
+  @return VOID.
+**/
+VOID
+EFIAPI
+GuardFreedPages (
+  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
+  IN  UINTN                   Pages
+  )
+{
+  EFI_STATUS      Status;
+
+  //
+  // Legacy memory lower than 1MB might be accessed with no allocation. Leave
+  // them alone.
+  //
+  if (BaseAddress < BASE_1MB) {
+    return;
+  }
+
+  MarkFreedPages (BaseAddress, Pages);
+  if (gCpu != NULL) {
+    //
+    // Set flag to make sure allocating memory without GUARD for page table
+    // operation; otherwise infinite loops could be caused.
+    //
+    mOnGuarding = TRUE;
+    //
+    // Note: This might overwrite other attributes needed by other features,
+    // such as NX memory protection.
+    //
+    Status = gCpu->SetMemoryAttributes (
+                     gCpu,
+                     BaseAddress,
+                     EFI_PAGES_TO_SIZE (Pages),
+                     EFI_MEMORY_RP
+                     );
+    //
+    // Normally we should ASSERT the returned Status. But there might be memory
+    // alloc/free involved in SetMemoryAttributes(), which might fail this
+    // calling. It's rare case so it's OK to let a few tiny holes be not-guarded.
+    //
+    if (EFI_ERROR (Status)) {
+      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%lu)\n", BaseAddress, (UINT64)Pages));
+    }
+    mOnGuarding = FALSE;
+  }
+}
+
+/**
+  Record freed pages as well as mark them as not-present, if enabled.
+
+  @param[in]  BaseAddress   Base address of just freed pages.
+  @param[in]  Pages         Number of freed pages.
+
+  @return VOID.
+**/
+VOID
+EFIAPI
+GuardFreedPagesChecked (
+  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
+  IN  UINTN                   Pages
+  )
+{
+  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
+    GuardFreedPages (BaseAddress, Pages);
+  }
+}
+
+/**
+  Mark all pages freed before CPU Arch Protocol as not-present.
+
+**/
+VOID
+GuardAllFreedPages (
+  VOID
+  )
+{
+  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    TableEntry;
+  UINT64    Address;
+  UINT64    GuardPage;
+  INTN      Level;
+  UINTN     BitIndex;
+  UINTN     GuardPageNumber;
+
+  if (mGuardedMemoryMap == 0 ||
+      mMapLevel == 0 ||
+      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
+    return;
+  }
+
+  CopyMem (Entries, mLevelMask, sizeof (Entries));
+  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
+
+  SetMem (Tables, sizeof(Tables), 0);
+  SetMem (Addresses, sizeof(Addresses), 0);
+  SetMem (Indices, sizeof(Indices), 0);
+
+  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+  Tables[Level]   = mGuardedMemoryMap;
+  Address         = 0;
+  GuardPage       = (UINT64)-1;
+  GuardPageNumber = 0;
+
+  while (TRUE) {
+    if (Indices[Level] > Entries[Level]) {
+      Tables[Level] = 0;
+      Level        -= 1;
+    } else {
+      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
+      Address     = Addresses[Level];
+
+      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
+        Level            += 1;
+        Tables[Level]     = TableEntry;
+        Addresses[Level]  = Address;
+        Indices[Level]    = 0;
+
+        continue;
+      } else {
+        BitIndex = 1;
+        while (BitIndex != 0) {
+          if ((TableEntry & BitIndex) != 0) {
+            if (GuardPage == (UINT64)-1) {
+              GuardPage = Address;
+            }
+            ++GuardPageNumber;
+          } else if (GuardPageNumber > 0) {
+            GuardFreedPages (GuardPage, GuardPageNumber);
+            GuardPageNumber = 0;
+            GuardPage       = (UINT64)-1;
+          }
+
+          if (TableEntry == 0) {
+            break;
+          }
+
+          Address += EFI_PAGES_TO_SIZE (1);
+          BitIndex = LShiftU64 (BitIndex, 1);
+        }
+      }
+    }
+
+    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
+      break;
+    }
+
+    Indices[Level] += 1;
+    Address = (Level == 0) ? 0 : Addresses[Level - 1];
+    Addresses[Level] = Address | LShiftU64 (Indices[Level], Shifts[Level]);
+
+  }
+
+  //
+  // Update the maximum address of freed page which can be used for memory
+  // promotion upon out-of-memory-space.
+  //
+  GetLastGuardedFreePageAddress (&Address);
+  if (Address != 0) {
+    mLastPromotedPage = Address;
+  }
+}
+
+/**
+  This function checks to see if the given memory map descriptor in a memory map
+  can be merged with any guarded free pages.
+
+  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
+  @param  MaxAddress        Maximum address to stop the merge.
+
+  @return VOID
+
+**/
+VOID
+MergeGuardPages (
+  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
+  IN EFI_PHYSICAL_ADDRESS       MaxAddress
+  )
+{
+  EFI_PHYSICAL_ADDRESS        EndAddress;
+  UINT64                      Bitmap;
+  INTN                        Pages;
+
+  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
+      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
+    return;
+  }
+
+  Bitmap = 0;
+  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
+  Pages -= MemoryMapEntry->NumberOfPages;
+  while (Pages > 0) {
+    if (Bitmap == 0) {
+      EndAddress = MemoryMapEntry->PhysicalStart +
+                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
+      Bitmap = GetGuardedMemoryBits (EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
+    }
+
+    if ((Bitmap & 1) == 0) {
+      break;
+    }
+
+    Pages--;
+    MemoryMapEntry->NumberOfPages++;
+    Bitmap = RShiftU64 (Bitmap, 1);
+  }
+}
+
+/**
+  Put part (at most 64 pages a time) guarded free pages back to free page pool.
+
+  Freed memory guard is used to detect Use-After-Free (UAF) memory issue, which
+  makes use of 'Used then throw away' way to detect any illegal access to freed
+  memory. The thrown-away memory will be marked as not-present so that any access
+  to those memory (after free) will be caught by page-fault exception.
+
+  The problem is that this will consume lots of memory space. Once no memory
+  left in pool to allocate, we have to restore part of the freed pages to their
+  normal function. Otherwise the whole system will stop functioning.
+
+  @param  StartAddress    Start address of promoted memory.
+  @param  EndAddress      End address of promoted memory.
+
+  @return TRUE    Succeeded to promote memory.
+  @return FALSE   No free memory found.
+
+**/
+BOOLEAN
+PromoteGuardedFreePages (
+  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
+  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
+  )
+{
+  EFI_STATUS              Status;
+  UINTN                   AvailablePages;
+  UINT64                  Bitmap;
+  EFI_PHYSICAL_ADDRESS    Start;
+
+  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
+    return FALSE;
+  }
+
+  //
+  // Similar to memory allocation service, always search the freed pages in
+  // descending direction.
+  //
+  Start           = mLastPromotedPage;
+  AvailablePages  = 0;
+  while (AvailablePages == 0) {
+    Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
+    //
+    // If the address wraps around, try the really freed pages at top.
+    //
+    if (Start > mLastPromotedPage) {
+      GetLastGuardedFreePageAddress (&Start);
+      ASSERT (Start != 0);
+      Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
+    }
+
+    Bitmap = GetGuardedMemoryBits (Start, GUARDED_HEAP_MAP_ENTRY_BITS);
+    while (Bitmap > 0) {
+      if ((Bitmap & 1) != 0) {
+        ++AvailablePages;
+      } else if (AvailablePages == 0) {
+        Start += EFI_PAGES_TO_SIZE (1);
+      } else {
+        break;
+      }
+
+      Bitmap = RShiftU64 (Bitmap, 1);
+    }
+  }
+
+  if (AvailablePages) {
+    DEBUG ((DEBUG_INFO, "Promoted pages: %lX (%lx)\r\n", Start, (UINT64)AvailablePages));
+    ClearGuardedMemoryBits (Start, AvailablePages);
+
+    if (gCpu != NULL) {
+      //
+      // Set flag to make sure allocating memory without GUARD for page table
+      // operation; otherwise infinite loops could be caused.
+      //
+      mOnGuarding = TRUE;
+      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
+      ASSERT_EFI_ERROR (Status);
+      mOnGuarding = FALSE;
+    }
+
+    mLastPromotedPage = Start;
+    *StartAddress     = Start;
+    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
+    return TRUE;
+  }
+
+  return FALSE;
+}
+
 /**
   Notify function used to set all Guard pages before CPU Arch Protocol installed.
 **/
@@ -1212,7 +1592,20 @@ HeapGuardCpuArchProtocolNotify (
   )
 {
   ASSERT (gCpu != NULL);
-  SetAllGuardPages ();
+
+  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL) &&
+      IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
+    DEBUG ((DEBUG_ERROR, "Heap guard and freed memory guard cannot be enabled at the same time.\n"));
+    CpuDeadLoop ();
+  }
+
+  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
+    SetAllGuardPages ();
+  }
+
+  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
+    GuardAllFreedPages ();
+  }
 }
 
 /**
@@ -1264,6 +1657,10 @@ DumpGuardedMemoryBitmap (
   CHAR8     *Ruler1;
   CHAR8     *Ruler2;
 
+  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_ALL)) {
+    return;
+  }
+
   if (mGuardedMemoryMap == 0 ||
       mMapLevel == 0 ||
       mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
index 8c34692439..55a91ec098 100644
--- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
@@ -1,7 +1,7 @@
 /** @file
   Data type, macros and function prototypes of heap guard feature.
 
-Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017-2018, Intel Corporation. All rights reserved.<BR>
 This program and the accompanying materials
 are licensed and made available under the terms and conditions of the BSD License
 which accompanies this distribution.  The full text of the license may be found at
@@ -160,6 +160,9 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 //
 #define GUARD_HEAP_TYPE_PAGE        BIT0
 #define GUARD_HEAP_TYPE_POOL        BIT1
+#define GUARD_HEAP_TYPE_FREED       BIT4
+#define GUARD_HEAP_TYPE_ALL         \
+        (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_FREED)
 
 //
 // Debug message level
@@ -392,11 +395,13 @@ AdjustPoolHeadF (
 /**
   Check to see if the heap guard is enabled for page and/or pool allocation.
 
+  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
+
   @return TRUE/FALSE.
 **/
 BOOLEAN
 IsHeapGuardEnabled (
-  VOID
+  UINT8           GuardType
   );
 
 /**
@@ -407,6 +412,62 @@ HeapGuardCpuArchProtocolNotify (
   VOID
   );
 
+/**
+  This function checks to see if the given memory map descriptor in a memory map
+  can be merged with any guarded free pages.
+
+  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
+  @param  MaxAddress        Maximum address to stop the merge.
+
+  @return VOID
+
+**/
+VOID
+MergeGuardPages (
+  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
+  IN EFI_PHYSICAL_ADDRESS       MaxAddress
+  );
+
+/**
+  Record freed pages as well as mark them as not-present, if enabled.
+
+  @param[in]  BaseAddress   Base address of just freed pages.
+  @param[in]  Pages         Number of freed pages.
+
+  @return VOID.
+**/
+VOID
+EFIAPI
+GuardFreedPagesChecked (
+  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
+  IN  UINTN                   Pages
+  );
+
+/**
+  Put part (at most 64 pages a time) guarded free pages back to free page pool.
+
+  Freed memory guard is used to detect Use-After-Free (UAF) memory issue, which
+  makes use of 'Used then throw away' way to detect any illegal access to freed
+  memory. The thrown-away memory will be marked as not-present so that any access
+  to those memory (after free) will be caught by page-fault exception.
+
+  The problem is that this will consume lots of memory space. Once no memory
+  left in pool to allocate, we have to restore part of the freed pages to their
+  normal function. Otherwise the whole system will stop functioning.
+
+  @param  StartAddress    Start address of promoted memory.
+  @param  EndAddress      End address of promoted memory.
+
+  @return TRUE    Succeeded to promote memory.
+  @return FALSE   No free memory found.
+
+**/
+BOOLEAN
+PromoteGuardedFreePages (
+  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
+  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
+  );
+
 extern BOOLEAN mOnGuarding;
 
 #endif
diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
index 3b4cc08e7c..9d9abcf565 100644
--- a/MdeModulePkg/Core/Dxe/Mem/Page.c
+++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
@@ -401,9 +401,11 @@ PromoteMemoryResource (
   VOID
   )
 {
-  LIST_ENTRY         *Link;
-  EFI_GCD_MAP_ENTRY  *Entry;
-  BOOLEAN            Promoted;
+  LIST_ENTRY            *Link;
+  EFI_GCD_MAP_ENTRY     *Entry;
+  BOOLEAN               Promoted;
+  EFI_PHYSICAL_ADDRESS  StartAddress;
+  EFI_PHYSICAL_ADDRESS  EndAddress;
 
   DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
 
@@ -451,6 +453,24 @@ PromoteMemoryResource (
 
   CoreReleaseGcdMemoryLock ();
 
+  if (!Promoted) {
+    //
+    // If freed-memory guard is enabled, we could promote pages from
+    // guarded free pages.
+    //
+    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
+    if (Promoted) {
+      CoreAcquireGcdMemoryLock ();
+      CoreAddRange (
+        EfiConventionalMemory,
+        StartAddress,
+        EndAddress,
+        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
+        );
+      CoreReleaseGcdMemoryLock ();
+    }
+  }
+
   return Promoted;
 }
 /**
@@ -896,9 +916,15 @@ CoreConvertPagesEx (
     }
 
     //
-    // Add our new range in
+    // Add our new range in. Don't do this for freed pages if freed-memory
+    // guard is enabled.
     //
-    CoreAddRange (MemType, Start, RangeEnd, Attribute);
+    if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
+        !ChangingType ||
+        MemType != EfiConventionalMemory) {
+      CoreAddRange (MemType, Start, RangeEnd, Attribute);
+    }
+
     if (ChangingType && (MemType == EfiConventionalMemory)) {
       //
       // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
@@ -1514,6 +1540,7 @@ CoreFreePages (
 
   Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
   if (!EFI_ERROR (Status)) {
+    GuardFreedPagesChecked (Memory, NumberOfPages);
     CoreUpdateProfile (
       (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
       MemoryProfileActionFreePages,
@@ -1908,9 +1935,7 @@ Done:
   *MemoryMapSize = BufferSize;
 
   DEBUG_CODE (
-    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
-      DumpGuardedMemoryBitmap ();
-    }
+    DumpGuardedMemoryBitmap ();
   );
 
   return Status;
diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
index 1ff2061f7f..b9182ea807 100644
--- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
+++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
@@ -1,7 +1,7 @@
 /** @file
   UEFI Memory pool management functions.
 
-Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
 This program and the accompanying materials
 are licensed and made available under the terms and conditions of the BSD License
 which accompanies this distribution.  The full text of the license may be found at
@@ -26,7 +26,8 @@ typedef struct {
 } POOL_FREE;
 
 
-#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
+#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
+#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
 typedef struct {
   UINT32          Signature;
   UINT32          Reserved;
@@ -367,6 +368,7 @@ CoreAllocatePoolI (
   UINTN       NoPages;
   UINTN       Granularity;
   BOOLEAN     HasPoolTail;
+  BOOLEAN     PageAsPool;
 
   ASSERT_LOCKED (&mPoolMemoryLock);
 
@@ -386,6 +388,7 @@ CoreAllocatePoolI (
 
   HasPoolTail  = !(NeedGuard &&
                    ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
+  PageAsPool = (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) && !mOnGuarding);
 
   //
   // Adjusting the Size to be of proper alignment so that
@@ -406,7 +409,7 @@ CoreAllocatePoolI (
   // If allocation is over max size, just allocate pages for the request
   // (slow)
   //
-  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
+  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
     if (!HasPoolTail) {
       Size -= sizeof (POOL_TAIL);
     }
@@ -498,7 +501,7 @@ Done:
     //
     // If we have a pool buffer, fill in the header & tail info
     //
-    Head->Signature = POOL_HEAD_SIGNATURE;
+    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
     Head->Size      = Size;
     Head->Type      = (EFI_MEMORY_TYPE) PoolType;
     Buffer          = Head->Data;
@@ -615,6 +618,7 @@ CoreFreePoolPagesI (
   CoreFreePoolPages (Memory, NoPages);
   CoreReleaseMemoryLock ();
 
+  GuardFreedPagesChecked (Memory, NoPages);
   ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
     (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
 }
@@ -685,15 +689,19 @@ CoreFreePoolI (
   UINTN       Granularity;
   BOOLEAN     IsGuarded;
   BOOLEAN     HasPoolTail;
+  BOOLEAN     PageAsPool;
 
   ASSERT(Buffer != NULL);
   //
   // Get the head & tail of the pool entry
   //
-  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
+  Head = BASE_CR (Buffer, POOL_HEAD, Data);
   ASSERT(Head != NULL);
 
-  if (Head->Signature != POOL_HEAD_SIGNATURE) {
+  if (Head->Signature != POOL_HEAD_SIGNATURE &&
+      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
+    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
+            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
     return EFI_INVALID_PARAMETER;
   }
 
@@ -701,6 +709,7 @@ CoreFreePoolI (
                 IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
   HasPoolTail = !(IsGuarded &&
                   ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
+  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
 
   if (HasPoolTail) {
     Tail = HEAD_TO_TAIL (Head);
@@ -757,7 +766,7 @@ CoreFreePoolI (
   //
   // If it's not on the list, it must be pool pages
   //
-  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
+  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
 
     //
     // Return the memory pages back to free memory
diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
index fa8f8fe91a..6298b67db1 100644
--- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
+++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
@@ -1250,7 +1250,7 @@ ApplyMemoryProtectionPolicy (
   // Don't overwrite Guard pages, which should be the first and/or last page,
   // if any.
   //
-  if (IsHeapGuardEnabled ()) {
+  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
     if (IsGuardPage (Memory))  {
       Memory += EFI_PAGE_SIZE;
       Length -= EFI_PAGE_SIZE;
diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
index 05eb4f422b..04cfb2dab2 100644
--- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
+++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
@@ -1,7 +1,7 @@
 /** @file
   UEFI PropertiesTable support
 
-Copyright (c) 2015 - 2017, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2015 - 2018, Intel Corporation. All rights reserved.<BR>
 This program and the accompanying materials
 are licensed and made available under the terms and conditions of the BSD License
 which accompanies this distribution.  The full text of the license may be found at
@@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 #include <Guid/PropertiesTable.h>
 
 #include "DxeMain.h"
+#include "HeapGuard.h"
 
 #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
   ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
@@ -205,16 +206,13 @@ MergeMemoryMap (
     NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
 
     do {
-      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
+      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
+      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
       if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
-          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
-          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
-          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
-        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
-        if (NewMemoryMapEntry != MemoryMapEntry) {
-          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
-        }
-
+          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
+          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
+          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
+        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
         NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
         continue;
       } else {
-- 
2.16.2.windows.1


* Re: [PATCH v3 1/6] MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation
  2018-10-24  5:26 ` [PATCH v3 1/6] MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation Jian J Wang
@ 2018-10-25  2:55   ` Zeng, Star
  2018-10-25  4:21     ` Wang, Jian J
  0 siblings, 1 reply; 16+ messages in thread
From: Zeng, Star @ 2018-10-25  2:55 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel
  Cc: Michael D Kinney, Ruiyu Ni, Jiewen Yao, Laszlo Ersek, star.zeng

On 2018/10/24 13:26, Jian J Wang wrote:
>> v3 changes:
>> a. split from #1 patch of v2
>> b. update title
> 
> This cleanup is meant to prevent misuse of the newly introduced BIT4
> (UAF detection) of PCD PcdHeapGuardPropertyMask, because that bit applies
> to all types of physical memory. In other words, PcdHeapGuardPoolType and
> PcdHeapGuardPageType have no effect on BIT4 of PcdHeapGuardPropertyMask.
> 
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>

Reviewed-by: Star Zeng <star.zeng@intel.com>

You may add Laszlo's RB, and even a Suggested-by, according to Laszlo's
feedback on the V2 patch series.


Thanks,
Star

> ---
>   MdeModulePkg/MdeModulePkg.dec | 4 ++++
>   MdeModulePkg/MdeModulePkg.uni | 2 ++
>   2 files changed, 6 insertions(+)
> 
> diff --git a/MdeModulePkg/MdeModulePkg.dec b/MdeModulePkg/MdeModulePkg.dec
> index 6037504fa7..2009dbc5fd 100644
> --- a/MdeModulePkg/MdeModulePkg.dec
> +++ b/MdeModulePkg/MdeModulePkg.dec
> @@ -955,6 +955,8 @@
>     # free pages for all of them. The page allocation for the type related to
>     # cleared bits keeps the same as ususal.
>     #
> +  # This PCD is only valid if BIT0 and/or BIT2 are set in PcdHeapGuardPropertyMask.
> +  #
>     # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
>     #  EfiReservedMemoryType             0x0000000000000001<BR>
>     #  EfiLoaderCode                     0x0000000000000002<BR>
> @@ -984,6 +986,8 @@
>     # if there's enough free memory for all of them. The pool allocation for the
>     # type related to cleared bits keeps the same as ususal.
>     #
> +  # This PCD is only valid if BIT1 and/or BIT3 are set in PcdHeapGuardPropertyMask.
> +  #
>     # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
>     #  EfiReservedMemoryType             0x0000000000000001<BR>
>     #  EfiLoaderCode                     0x0000000000000002<BR>
> diff --git a/MdeModulePkg/MdeModulePkg.uni b/MdeModulePkg/MdeModulePkg.uni
> index a6bcb627cf..9d2e473fa9 100644
> --- a/MdeModulePkg/MdeModulePkg.uni
> +++ b/MdeModulePkg/MdeModulePkg.uni
> @@ -1171,6 +1171,7 @@
>                                                                                           " before and after corresponding type of pages allocated if there's enough\n"
>                                                                                           " free pages for all of them. The page allocation for the type related to\n"
>                                                                                           " cleared bits keeps the same as ususal.\n\n"
> +                                                                                        " This PCD is only valid if BIT0 and/or BIT2 are set in PcdHeapGuardPropertyMask.\n\n"
>                                                                                           " Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>\n"
>                                                                                           "  EfiReservedMemoryType             0x0000000000000001\n"
>                                                                                           "  EfiLoaderCode                     0x0000000000000002\n"
> @@ -1198,6 +1199,7 @@
>                                                                                           " before and after corresponding type of pages which the allocated pool occupies,\n"
>                                                                                           " if there's enough free memory for all of them. The pool allocation for the\n"
>                                                                                           " type related to cleared bits keeps the same as ususal.\n\n"
> +                                                                                        " This PCD is only valid if BIT1 and/or BIT3 are set in PcdHeapGuardPropertyMask.\n\n"
>                                                                                           " Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>\n"
>                                                                                           "  EfiReservedMemoryType             0x0000000000000001\n"
>                                                                                           "  EfiLoaderCode                     0x0000000000000002\n"
> 
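To make the new notes concrete: a platform DSC that wants, say, UEFI
page guarding of BootServicesData and RuntimeServicesData allocations
would need both PCDs set, roughly like this (values are an illustrative
sketch derived from the bit assignments above, not part of the patch):

  [PcdsFixedAtBuild]
    # BIT0 = UEFI page guard; PcdHeapGuardPageType below is only
    # consulted when BIT0 and/or BIT2 is set.
    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask|0x01
    # BIT4|BIT6 = EfiBootServicesData | EfiRuntimeServicesData
    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType|0x50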



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 2/6] MdeModulePkg: introduce UEFI freed-memory guard bit in HeapGuard PCD
  2018-10-24  5:26 ` [PATCH v3 2/6] MdeModulePkg: introduce UEFI freed-memory guard bit in HeapGuard PCD Jian J Wang
@ 2018-10-25  3:02   ` Zeng, Star
  2018-10-25  4:23     ` Wang, Jian J
  0 siblings, 1 reply; 16+ messages in thread
From: Zeng, Star @ 2018-10-25  3:02 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel
  Cc: Michael D Kinney, Ruiyu Ni, Jiewen Yao, Laszlo Ersek, star.zeng

On 2018/10/24 13:26, Jian J Wang wrote:
>> v3 changes:
>> a. split from v2 #1 patch file.
>> b. refine the commit message and title.
> 
> A UAF (Use-After-Free) memory issue is a kind of illegal access to
> memory which has been freed. It can be detected by a new freed-memory
> guard enforced onto freed memory.
> 
> BIT4 of following PCD is used to enable the freed-memory guard feature.
> 
>    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
> 
> Please note this feature is for debug purpose and should not be enabled

Suggest also adding this information into the PCD description.
Pool/page heap guard also has the same condition, right?
If yes, we can have a generic sentence for the whole PCD.
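For instance, something along these lines could be shared by the whole
PCD (the wording here is only a sketch):

  # Note that all Heap Guard features (page, pool and freed-memory
  # guard) are for debug purpose and should not be enabled in a
  # production BIOS.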

With this addressed, Reviewed-by: Star Zeng <star.zeng@intel.com>.


Thanks,
Star

> in product BIOS, and cannot be enabled with pool/page heap guard at the
> same time. It's disabled by default.
> 
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>   MdeModulePkg/MdeModulePkg.dec | 6 ++++++
>   MdeModulePkg/MdeModulePkg.uni | 4 +++-
>   2 files changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/MdeModulePkg/MdeModulePkg.dec b/MdeModulePkg/MdeModulePkg.dec
> index 2009dbc5fd..255b92ea67 100644
> --- a/MdeModulePkg/MdeModulePkg.dec
> +++ b/MdeModulePkg/MdeModulePkg.dec
> @@ -1011,14 +1011,20 @@
>     gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType|0x0|UINT64|0x30001053
>   
>     ## This mask is to control Heap Guard behavior.
> +  #
>     # Note that due to the limit of pool memory implementation and the alignment
>     # requirement of UEFI spec, BIT7 is a try-best setting which cannot guarantee
>     # that the returned pool is exactly adjacent to head guard page or tail guard
>     # page.
> +  #
> +  # Note that UEFI freed-memory guard and pool/page guard cannot be enabled
> +  # at the same time.
> +  #
>     #   BIT0 - Enable UEFI page guard.<BR>
>     #   BIT1 - Enable UEFI pool guard.<BR>
>     #   BIT2 - Enable SMM page guard.<BR>
>     #   BIT3 - Enable SMM pool guard.<BR>
> +  #   BIT4 - Enable UEFI freed-memory guard (Use-After-Free memory detection).<BR>
>     #   BIT6 - Enable non-stop mode.<BR>
>     #   BIT7 - The direction of Guard Page for Pool Guard.
>     #          0 - The returned pool is near the tail guard page.<BR>
> diff --git a/MdeModulePkg/MdeModulePkg.uni b/MdeModulePkg/MdeModulePkg.uni
> index 9d2e473fa9..e72b893509 100644
> --- a/MdeModulePkg/MdeModulePkg.uni
> +++ b/MdeModulePkg/MdeModulePkg.uni
> @@ -1227,11 +1227,13 @@
>                                                                                               "Note that due to the limit of pool memory implementation and the alignment\n"
>                                                                                               "requirement of UEFI spec, BIT7 is a try-best setting which cannot guarantee\n"
>                                                                                               "that the returned pool is exactly adjacent to head guard page or tail guard\n"
> -                                                                                            "page.\n"
> +                                                                                            "page.\n\n"
> +                                                                                            "Note that UEFI freed-memory guard and pool/page guard cannot be enabled at the same time.\n\n"
>                                                                                               "   BIT0 - Enable UEFI page guard.<BR>\n"
>                                                                                               "   BIT1 - Enable UEFI pool guard.<BR>\n"
>                                                                                               "   BIT2 - Enable SMM page guard.<BR>\n"
>                                                                                               "   BIT3 - Enable SMM pool guard.<BR>\n"
> +                                                                                            "   BIT4 - Enable UEFI freed-memory guard (Use-After-Free memory detection).<BR>\n"
>                                                                                               "   BIT7 - The direction of Guard Page for Pool Guard.\n"
>                                                                                               "          0 - The returned pool is near the tail guard page.<BR>\n"
>                                                                                               "          1 - The returned pool is near the head guard page.<BR>"
> 
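As a usage sketch (the value follows the bit definitions above and is
not taken from the patch): a platform enabling the new detection would
set only BIT4,

  gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask|0x10

whereas any mask combining BIT4 with BIT0/BIT1 (e.g. 0x11) would
violate the "cannot be enabled at the same time" rule stated in the
patch.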



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock
  2018-10-24  5:26 ` [PATCH v3 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock Jian J Wang
@ 2018-10-25  3:22   ` Zeng, Star
  2018-10-25  4:24     ` Wang, Jian J
  0 siblings, 1 reply; 16+ messages in thread
From: Zeng, Star @ 2018-10-25  3:22 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel
  Cc: Michael D Kinney, Ruiyu Ni, Jiewen Yao, Laszlo Ersek, star.zeng

Some minor comments are added below.
With them addressed, Reviewed-by: Star Zeng <star.zeng@intel.com>.

On 2018/10/24 13:26, Jian J Wang wrote:
>> v3 changes:
>> a. drop the changes to CoreGetIoSpaceMap() because it won't cause
>>     problem
>> b. refine the logic in changes of CoreGetMemorySpaceMap()
>>     and add more comments
> 
> This issue is hidden in the current code but exposed by the
> introduction of the freed-memory guard feature, due to the fact that
> the feature will turn all pool allocations into page allocations.
> 
> The solution is move the memory allocation in CoreGetMemorySpaceMap()

Use 'moving' instead of 'move'?

> and CoreGetIoSpaceMap() to be out of the GCD memory map lock.

Please remove CoreGetIoSpaceMap() as the code does not touch it at all.

> 
> Although it's rare, a while-loop is added to make sure that the count
> of memory map descriptors is the same after memory allocation, because
> the allocation is executed outside the GCD memory lock.
> 
>     CoreDumpGcdMemorySpaceMap()
> => CoreGetMemorySpaceMap()
> => CoreAcquireGcdMemoryLock () *
>     AllocatePool()
> => InternalAllocatePool()
> => CoreAllocatePool()
> => CoreAllocatePoolI()
> => CoreAllocatePoolPagesI()
> => CoreAllocatePoolPages()
> => FindFreePages()
> => PromoteMemoryResource()
> => CoreAcquireGcdMemoryLock()  **
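For readers following the archive: the hunk quoted further down boils
down to an optimistic retry loop -- count under the lock, allocate with
the lock dropped, then re-take the lock and re-validate the count. A
minimal sketch of that shape in C (error handling and the FreePool of a
stale buffer, which the real hunk includes, are elided here):

  CoreAcquireGcdMemoryLock ();
  while (TRUE) {
    DescriptorCount = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
    if (DescriptorCount == *NumberOfDescriptors) {
      //
      // Map unchanged while unlocked; fill *MemorySpaceMap and stop.
      //
      break;
    }
    //
    // AllocatePool() may recurse into the GCD lock, so drop it first.
    //
    CoreReleaseGcdMemoryLock ();
    *MemorySpaceMap      = AllocatePool (
                             DescriptorCount * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR)
                             );
    *NumberOfDescriptors = DescriptorCount;
    CoreAcquireGcdMemoryLock ();
  }
  CoreReleaseGcdMemoryLock ();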
> 
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>   MdeModulePkg/Core/Dxe/Gcd/Gcd.c | 80 +++++++++++++++++++++++++++--------------
>   1 file changed, 54 insertions(+), 26 deletions(-)
> 
> diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> index d9c65a8045..bc02945721 100644
> --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> @@ -1691,10 +1691,10 @@ CoreGetMemorySpaceMap (
>     OUT EFI_GCD_MEMORY_SPACE_DESCRIPTOR  **MemorySpaceMap
>     )
>   {
> -  EFI_STATUS                       Status;
>     LIST_ENTRY                       *Link;
>     EFI_GCD_MAP_ENTRY                *Entry;
>     EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *Descriptor;
> +  UINTN                            DescriptorCount;
>   
>     //
>     // Make sure parameters are valid
> @@ -1706,38 +1706,66 @@ CoreGetMemorySpaceMap (
>       return EFI_INVALID_PARAMETER;
>     }
>   
> +  *NumberOfDescriptors  = 0;
> +  *MemorySpaceMap       = NULL;
> +
>     CoreAcquireGcdMemoryLock ();

Better to add a comment for it like the one below, quoted from the example at 
https://lists.01.org/pipermail/edk2-devel/2018-October/031279.html given 
by Laszlo.

//
// Take the lock, for entering the loop with the lock held.
//

>   
> -  //
> -  // Count the number of descriptors
> -  //
> -  *NumberOfDescriptors = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
> +  while (TRUE) {
> +    //
> +    // Count the number of descriptors. It might be done more than once because

Use 'count' instead of 'number' to match the variable name?

> +    // there's code which has to be running outside the GCD lock.

Use "AllocatePool() calling code below" instead of "there's code"?

> +    //
> +    DescriptorCount = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
> +    if (DescriptorCount == *NumberOfDescriptors) {
> +      //
> +      // Fill in the MemorySpaceMap if no memory space map change.
> +      //
> +      Descriptor = *MemorySpaceMap;
> +      Link = mGcdMemorySpaceMap.ForwardLink;
> +      while (Link != &mGcdMemorySpaceMap) {
> +        Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
> +        BuildMemoryDescriptor (Descriptor, Entry);
> +        Descriptor++;
> +        Link = Link->ForwardLink;
> +      }
>   
> -  //
> -  // Allocate the MemorySpaceMap
> -  //
> -  *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
> -  if (*MemorySpaceMap == NULL) {
> -    Status = EFI_OUT_OF_RESOURCES;
> -    goto Done;
> -  }
> +      break;
> +    }
>   
> -  //
> -  // Fill in the MemorySpaceMap
> -  //
> -  Descriptor = *MemorySpaceMap;
> -  Link = mGcdMemorySpaceMap.ForwardLink;
> -  while (Link != &mGcdMemorySpaceMap) {
> -    Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
> -    BuildMemoryDescriptor (Descriptor, Entry);
> -    Descriptor++;
> -    Link = Link->ForwardLink;
> +    //
> +    // Release the lock before memory allocation, because it might cause
> +    // GCD lock conflict in one of the calling paths in AllocatePool().
> +    //
> +    CoreReleaseGcdMemoryLock ();
> +
> +    //
> +    // Allocate memory to store the MemorySpaceMap. Note it might be already
> +    // allocated if there's map descriptor change during memory allocation at
> +    // last time.
> +    //
> +    if (*MemorySpaceMap != NULL) {
> +      FreePool (*MemorySpaceMap);
> +    }
> +
> +    *MemorySpaceMap = AllocatePool (DescriptorCount *
> +                                    sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
> +    if (*MemorySpaceMap == NULL) {
> +      *NumberOfDescriptors = 0;
> +      return EFI_OUT_OF_RESOURCES;
> +    }
> +
> +    //
> +    // Save the descriptor number got before for another round of check to make

Use 'count' instead of 'number' to match the variable name?

> +    // sure we won't miss any, since we have code running outside the GCD lock.
> +    //
> +    *NumberOfDescriptors = DescriptorCount;
> +    CoreAcquireGcdMemoryLock ();

Better to add a comment for it like the one below, quoted from the example at 
https://lists.01.org/pipermail/edk2-devel/2018-October/031279.html given 
by Laszlo.

//
// Re-acquire the lock, for the next iteration.
//

>     }
> -  Status = EFI_SUCCESS;
>   
> -Done:
>     CoreReleaseGcdMemoryLock ();

Better to add a comment for it like the one below, quoted from the example at 
https://lists.01.org/pipermail/edk2-devel/2018-October/031279.html given 
by Laszlo.

//
// We exited the loop with the lock held, release it.
//


Thanks,
Star

> -  return Status;
> +
> +  return EFI_SUCCESS;
>   }
>   
>   
> 



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 6/6] MdeModulePkg/Core: add freed-memory guard feature
  2018-10-24  5:26 ` [PATCH v3 6/6] MdeModulePkg/Core: add freed-memory guard feature Jian J Wang
@ 2018-10-25  3:37   ` Zeng, Star
  2018-10-25  4:29     ` Wang, Jian J
  2018-10-25  6:28     ` Wang, Jian J
  0 siblings, 2 replies; 16+ messages in thread
From: Zeng, Star @ 2018-10-25  3:37 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel
  Cc: Michael D Kinney, Ruiyu Ni, Jiewen Yao, Laszlo Ersek, star.zeng

On 2018/10/24 13:26, Jian J Wang wrote:
>> v3 changes:
>> a. Merge from #4 and #5 of v2 patch
>> b. Coding style cleanup
> 
> Freed-memory guard is used to detect UAF (Use-After-Free) memory issue
> which is illegal access to memory which has been freed. The principle
> behind is similar to heap guard feature, that is we'll turn all pool

Since we also regard UAF as part of the heap guard feature, better to use 
"pool/page heap guard feature" instead of "heap guard feature" here.

I quoted a piece of code below and have a question: why does it use a 
hard-coded Attribute parameter?

+      CoreAddRange (
+        EfiConventionalMemory,
+        StartAddress,
+        EndAddress,
+        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
+        );

Thanks,
Star

> memory allocation into page allocation and mark the pages as
> not-present once they are freed. The freed memory block will not be
> added back into the memory pool.
> 
> This means that, once a page is allocated and freed, it cannot be
> re-allocated. This brings an issue: there is a risk that memory space
> will be exhausted. To address it, the memory service adds logic to put
> part (at most 64 pages at a time) of the freed pages back into the page
> pool, so that the memory service still has memory to allocate even
> after all memory space has been allocated once. This is called memory
> promotion. The promoted pages are always the eldest pages that have
> been freed.
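A quick sanity check on the "at most 64 pages at a time" figure: the
promotion path scans one UINT64 word of the guard bitmap per call
(GUARDED_HEAP_MAP_ENTRY_BITS), so a single promotion can hand back at
most 64 x 4 KiB = 256 KiB to the page pool.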
> 
> This feature brings another problem: the number of memory map
> descriptors will increase enormously (200+ -> 2000+). One of the
> changes in this patch is to update MergeMemoryMap() in file
> PropertiesTable.c to allow merging freed pages back into the memory
> map. Now the number can stay at around 510.
> 
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>   MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 409 +++++++++++++++++++++++++-
>   MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |  65 +++-
>   MdeModulePkg/Core/Dxe/Mem/Page.c              |  41 ++-
>   MdeModulePkg/Core/Dxe/Mem/Pool.c              |  23 +-
>   MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   2 +-
>   MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c  |  18 +-
>   6 files changed, 524 insertions(+), 34 deletions(-)
> 
> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> index 663f969c0d..449a022658 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
>   GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
>                                       = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
>   
> +//
> +// Used for promoting freed but not used pages.
> +//
> +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
> +
>   /**
>     Set corresponding bits in bitmap table to 1 according to the address.
>   
> @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
>   
>     @return An integer containing the guarded memory bitmap.
>   **/
> -UINTN
> +UINT64
>   GetGuardedMemoryBits (
>     IN EFI_PHYSICAL_ADDRESS    Address,
>     IN UINTN                   NumberOfPages
> @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
>   {
>     UINT64            *BitMap;
>     UINTN             Bits;
> -  UINTN             Result;
> +  UINT64            Result;
>     UINTN             Shift;
>     UINTN             BitsToUnitEnd;
>   
> @@ -660,15 +665,16 @@ IsPageTypeToGuard (
>   /**
>     Check to see if the heap guard is enabled for page and/or pool allocation.
>   
> +  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
> +
>     @return TRUE/FALSE.
>   **/
>   BOOLEAN
>   IsHeapGuardEnabled (
> -  VOID
> +  UINT8           GuardType
>     )
>   {
> -  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages,
> -                              GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_PAGE);
> +  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages, GuardType);
>   }
>   
>   /**
> @@ -1203,6 +1209,380 @@ SetAllGuardPages (
>     }
>   }
>   
> +/**
> +  Find the address of top-most guarded free page.
> +
> +  @param[out]  Address    Start address of top-most guarded free page.
> +
> +  @return VOID.
> +**/
> +VOID
> +GetLastGuardedFreePageAddress (
> +  OUT EFI_PHYSICAL_ADDRESS      *Address
> +  )
> +{
> +  EFI_PHYSICAL_ADDRESS    AddressGranularity;
> +  EFI_PHYSICAL_ADDRESS    BaseAddress;
> +  UINTN                   Level;
> +  UINT64                  Map;
> +  INTN                    Index;
> +
> +  ASSERT (mMapLevel >= 1);
> +
> +  BaseAddress = 0;
> +  Map = mGuardedMemoryMap;
> +  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> +       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
> +       ++Level) {
> +    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
> +
> +    //
> +    // Find the non-NULL entry at largest index.
> +    //
> +    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
> +      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
> +        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
> +        Map = ((UINT64 *)(UINTN)Map)[Index];
> +        break;
> +      }
> +    }
> +  }
> +
> +  //
> +  // Find the non-zero MSB then get the page address.
> +  //
> +  while (Map != 0) {
> +    Map = RShiftU64 (Map, 1);
> +    BaseAddress += EFI_PAGES_TO_SIZE (1);
> +  }
> +
> +  *Address = BaseAddress;
> +}
> +
> +/**
> +  Record freed pages.
> +
> +  @param[in]  BaseAddress   Base address of just freed pages.
> +  @param[in]  Pages         Number of freed pages.
> +
> +  @return VOID.
> +**/
> +VOID
> +MarkFreedPages (
> +  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
> +  IN UINTN                    Pages
> +  )
> +{
> +  SetGuardedMemoryBits (BaseAddress, Pages);
> +}
> +
> +/**
> +  Record freed pages as well as mark them as not-present.
> +
> +  @param[in]  BaseAddress   Base address of just freed pages.
> +  @param[in]  Pages         Number of freed pages.
> +
> +  @return VOID.
> +**/
> +VOID
> +EFIAPI
> +GuardFreedPages (
> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> +  IN  UINTN                   Pages
> +  )
> +{
> +  EFI_STATUS      Status;
> +
> +  //
> +  // Legacy memory lower than 1MB might be accessed with no allocation. Leave
> +  // them alone.
> +  //
> +  if (BaseAddress < BASE_1MB) {
> +    return;
> +  }
> +
> +  MarkFreedPages (BaseAddress, Pages);
> +  if (gCpu != NULL) {
> +    //
> +    // Set flag to make sure allocating memory without GUARD for page table
> +    // operation; otherwise infinite loops could be caused.
> +    //
> +    mOnGuarding = TRUE;
> +    //
> +    // Note: This might overwrite other attributes needed by other features,
> +    // such as NX memory protection.
> +    //
> +    Status = gCpu->SetMemoryAttributes (
> +                     gCpu,
> +                     BaseAddress,
> +                     EFI_PAGES_TO_SIZE (Pages),
> +                     EFI_MEMORY_RP
> +                     );
> +    //
> +    // Normally we should ASSERT the returned Status. But there might be memory
> +    // alloc/free involved in SetMemoryAttributes(), which might make this
> +    // call fail. It's a rare case, so it's OK to let a few tiny holes be not-guarded.
> +    //
> +    if (EFI_ERROR (Status)) {
> +      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%lu)\n", BaseAddress, (UINT64)Pages));
> +    }
> +    mOnGuarding = FALSE;
> +  }
> +}
> +
> +/**
> +  Record freed pages as well as mark them as not-present, if enabled.
> +
> +  @param[in]  BaseAddress   Base address of just freed pages.
> +  @param[in]  Pages         Number of freed pages.
> +
> +  @return VOID.
> +**/
> +VOID
> +EFIAPI
> +GuardFreedPagesChecked (
> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> +  IN  UINTN                   Pages
> +  )
> +{
> +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> +    GuardFreedPages (BaseAddress, Pages);
> +  }
> +}
> +
> +/**
> +  Mark all pages freed before CPU Arch Protocol as not-present.
> +
> +**/
> +VOID
> +GuardAllFreedPages (
> +  VOID
> +  )
> +{
> +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    TableEntry;
> +  UINT64    Address;
> +  UINT64    GuardPage;
> +  INTN      Level;
> +  UINTN     BitIndex;
> +  UINTN     GuardPageNumber;
> +
> +  if (mGuardedMemoryMap == 0 ||
> +      mMapLevel == 0 ||
> +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> +    return;
> +  }
> +
> +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> +
> +  SetMem (Tables, sizeof(Tables), 0);
> +  SetMem (Addresses, sizeof(Addresses), 0);
> +  SetMem (Indices, sizeof(Indices), 0);
> +
> +  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> +  Tables[Level]   = mGuardedMemoryMap;
> +  Address         = 0;
> +  GuardPage       = (UINT64)-1;
> +  GuardPageNumber = 0;
> +
> +  while (TRUE) {
> +    if (Indices[Level] > Entries[Level]) {
> +      Tables[Level] = 0;
> +      Level        -= 1;
> +    } else {
> +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> +      Address     = Addresses[Level];
> +
> +      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> +        Level            += 1;
> +        Tables[Level]     = TableEntry;
> +        Addresses[Level]  = Address;
> +        Indices[Level]    = 0;
> +
> +        continue;
> +      } else {
> +        BitIndex = 1;
> +        while (BitIndex != 0) {
> +          if ((TableEntry & BitIndex) != 0) {
> +            if (GuardPage == (UINT64)-1) {
> +              GuardPage = Address;
> +            }
> +            ++GuardPageNumber;
> +          } else if (GuardPageNumber > 0) {
> +            GuardFreedPages (GuardPage, GuardPageNumber);
> +            GuardPageNumber = 0;
> +            GuardPage       = (UINT64)-1;
> +          }
> +
> +          if (TableEntry == 0) {
> +            break;
> +          }
> +
> +          Address += EFI_PAGES_TO_SIZE (1);
> +          BitIndex = LShiftU64 (BitIndex, 1);
> +        }
> +      }
> +    }
> +
> +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> +      break;
> +    }
> +
> +    Indices[Level] += 1;
> +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> +    Addresses[Level] = Address | LShiftU64 (Indices[Level], Shifts[Level]);
> +
> +  }
> +
> +  //
> +  // Update the maximum address of freed page which can be used for memory
> +  // promotion upon out-of-memory-space.
> +  //
> +  GetLastGuardedFreePageAddress (&Address);
> +  if (Address != 0) {
> +    mLastPromotedPage = Address;
> +  }
> +}
> +
> +/**
> +  This function checks to see if the given memory map descriptor in a memory map
> +  can be merged with any guarded free pages.
> +
> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> +  @param  MaxAddress        Maximum address to stop the merge.
> +
> +  @return VOID
> +
> +**/
> +VOID
> +MergeGuardPages (
> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> +  )
> +{
> +  EFI_PHYSICAL_ADDRESS        EndAddress;
> +  UINT64                      Bitmap;
> +  INTN                        Pages;
> +
> +  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
> +      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
> +    return;
> +  }
> +
> +  Bitmap = 0;
> +  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
> +  Pages -= MemoryMapEntry->NumberOfPages;
> +  while (Pages > 0) {
> +    if (Bitmap == 0) {
> +      EndAddress = MemoryMapEntry->PhysicalStart +
> +                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
> +      Bitmap = GetGuardedMemoryBits (EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
> +    }
> +
> +    if ((Bitmap & 1) == 0) {
> +      break;
> +    }
> +
> +    Pages--;
> +    MemoryMapEntry->NumberOfPages++;
> +    Bitmap = RShiftU64 (Bitmap, 1);
> +  }
> +}
> +
> +/**
> +  Put part (at most 64 pages at a time) of the guarded free pages back into the free page pool.
> +
> +  Freed memory guard is used to detect Use-After-Free (UAF) memory issue, which
> +  makes use of 'Used then throw away' way to detect any illegal access to freed
> +  memory. The thrown-away memory will be marked as not-present so that any access
> +  to those memory (after free) will be caught by page-fault exception.
> +
> +  The problem is that this will consume lots of memory space. Once no memory
> +  left in pool to allocate, we have to restore part of the freed pages to their
> +  normal function. Otherwise the whole system will stop functioning.
> +
> +  @param  StartAddress    Start address of promoted memory.
> +  @param  EndAddress      End address of promoted memory.
> +
> +  @return TRUE    Succeeded to promote memory.
> +  @return FALSE   No free memory found.
> +
> +**/
> +BOOLEAN
> +PromoteGuardedFreePages (
> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> +  )
> +{
> +  EFI_STATUS              Status;
> +  UINTN                   AvailablePages;
> +  UINT64                  Bitmap;
> +  EFI_PHYSICAL_ADDRESS    Start;
> +
> +  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> +    return FALSE;
> +  }
> +
> +  //
> +  // Similar to memory allocation service, always search the freed pages in
> +  // descending direction.
> +  //
> +  Start           = mLastPromotedPage;
> +  AvailablePages  = 0;
> +  while (AvailablePages == 0) {
> +    Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
> +    //
> +    // If the address wraps around, try the really freed pages at top.
> +    //
> +    if (Start > mLastPromotedPage) {
> +      GetLastGuardedFreePageAddress (&Start);
> +      ASSERT (Start != 0);
> +      Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
> +    }
> +
> +    Bitmap = GetGuardedMemoryBits (Start, GUARDED_HEAP_MAP_ENTRY_BITS);
> +    while (Bitmap > 0) {
> +      if ((Bitmap & 1) != 0) {
> +        ++AvailablePages;
> +      } else if (AvailablePages == 0) {
> +        Start += EFI_PAGES_TO_SIZE (1);
> +      } else {
> +        break;
> +      }
> +
> +      Bitmap = RShiftU64 (Bitmap, 1);
> +    }
> +  }
> +
> +  if (AvailablePages) {
> +    DEBUG ((DEBUG_INFO, "Promoted pages: %lX (%lx)\r\n", Start, (UINT64)AvailablePages));
> +    ClearGuardedMemoryBits (Start, AvailablePages);
> +
> +    if (gCpu != NULL) {
> +      //
> +      // Set flag to make sure allocating memory without GUARD for page table
> +      // operation; otherwise infinite loops could be caused.
> +      //
> +      mOnGuarding = TRUE;
> +      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
> +      ASSERT_EFI_ERROR (Status);
> +      mOnGuarding = FALSE;
> +    }
> +
> +    mLastPromotedPage = Start;
> +    *StartAddress     = Start;
> +    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
> +    return TRUE;
> +  }
> +
> +  return FALSE;
> +}
> +
>   /**
>     Notify function used to set all Guard pages before CPU Arch Protocol installed.
>   **/
> @@ -1212,7 +1592,20 @@ HeapGuardCpuArchProtocolNotify (
>     )
>   {
>     ASSERT (gCpu != NULL);
> -  SetAllGuardPages ();
> +
> +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL) &&
> +      IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> +    DEBUG ((DEBUG_ERROR, "Heap guard and freed memory guard cannot be enabled at the same time.\n"));
> +    CpuDeadLoop ();
> +  }
> +
> +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
> +    SetAllGuardPages ();
> +  }
> +
> +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> +    GuardAllFreedPages ();
> +  }
>   }
>   
>   /**
> @@ -1264,6 +1657,10 @@ DumpGuardedMemoryBitmap (
>     CHAR8     *Ruler1;
>     CHAR8     *Ruler2;
>   
> +  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_ALL)) {
> +    return;
> +  }
> +
>     if (mGuardedMemoryMap == 0 ||
>         mMapLevel == 0 ||
>         mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> index 8c34692439..55a91ec098 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> @@ -1,7 +1,7 @@
>   /** @file
>     Data type, macros and function prototypes of heap guard feature.
>   
> -Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017-2018, Intel Corporation. All rights reserved.<BR>
>   This program and the accompanying materials
>   are licensed and made available under the terms and conditions of the BSD License
>   which accompanies this distribution.  The full text of the license may be found at
> @@ -160,6 +160,9 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>   //
>   #define GUARD_HEAP_TYPE_PAGE        BIT0
>   #define GUARD_HEAP_TYPE_POOL        BIT1
> +#define GUARD_HEAP_TYPE_FREED       BIT4
> +#define GUARD_HEAP_TYPE_ALL         \
> +        (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_FREED)
>   
>   //
>   // Debug message level
> @@ -392,11 +395,13 @@ AdjustPoolHeadF (
>   /**
>     Check to see if the heap guard is enabled for page and/or pool allocation.
>   
> +  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
> +
>     @return TRUE/FALSE.
>   **/
>   BOOLEAN
>   IsHeapGuardEnabled (
> -  VOID
> +  UINT8           GuardType
>     );
>   
>   /**
> @@ -407,6 +412,62 @@ HeapGuardCpuArchProtocolNotify (
>     VOID
>     );
>   
> +/**
> +  This function checks to see if the given memory map descriptor in a memory map
> +  can be merged with any guarded free pages.
> +
> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> +  @param  MaxAddress        Maximum address to stop the merge.
> +
> +  @return VOID
> +
> +**/
> +VOID
> +MergeGuardPages (
> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> +  );
> +
> +/**
> +  Record freed pages as well as mark them as not-present, if enabled.
> +
> +  @param[in]  BaseAddress   Base address of just freed pages.
> +  @param[in]  Pages         Number of freed pages.
> +
> +  @return VOID.
> +**/
> +VOID
> +EFIAPI
> +GuardFreedPagesChecked (
> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> +  IN  UINTN                   Pages
> +  );
> +
> +/**
> +  Put part (at most 64 pages at a time) of the guarded free pages back into the free page pool.
> +
> +  Freed memory guard is used to detect Use-After-Free (UAF) memory issue, which
> +  makes use of 'Used then throw away' way to detect any illegal access to freed
> +  memory. The thrown-away memory will be marked as not-present so that any access
> +  to those memory (after free) will be caught by page-fault exception.
> +
> +  The problem is that this will consume lots of memory space. Once no memory
> +  left in pool to allocate, we have to restore part of the freed pages to their
> +  normal function. Otherwise the whole system will stop functioning.
> +
> +  @param  StartAddress    Start address of promoted memory.
> +  @param  EndAddress      End address of promoted memory.
> +
> +  @return TRUE    Succeeded to promote memory.
> +  @return FALSE   No free memory found.
> +
> +**/
> +BOOLEAN
> +PromoteGuardedFreePages (
> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> +  );
> +
>   extern BOOLEAN mOnGuarding;
>   
>   #endif
> diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
> index 3b4cc08e7c..9d9abcf565 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
> +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
> @@ -401,9 +401,11 @@ PromoteMemoryResource (
>     VOID
>     )
>   {
> -  LIST_ENTRY         *Link;
> -  EFI_GCD_MAP_ENTRY  *Entry;
> -  BOOLEAN            Promoted;
> +  LIST_ENTRY            *Link;
> +  EFI_GCD_MAP_ENTRY     *Entry;
> +  BOOLEAN               Promoted;
> +  EFI_PHYSICAL_ADDRESS  StartAddress;
> +  EFI_PHYSICAL_ADDRESS  EndAddress;
>   
>     DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
>   
> @@ -451,6 +453,24 @@ PromoteMemoryResource (
>   
>     CoreReleaseGcdMemoryLock ();
>   
> +  if (!Promoted) {
> +    //
> +    // If freed-memory guard is enabled, we could promote pages from
> +    // guarded free pages.
> +    //
> +    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
> +    if (Promoted) {
> +      CoreAcquireGcdMemoryLock ();
> +      CoreAddRange (
> +        EfiConventionalMemory,
> +        StartAddress,
> +        EndAddress,
> +        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
> +        );
> +      CoreReleaseGcdMemoryLock ();
> +    }
> +  }
> +
>     return Promoted;
>   }
>   /**
> @@ -896,9 +916,15 @@ CoreConvertPagesEx (
>       }
>   
>       //
> -    // Add our new range in
> +    // Add our new range in. Don't do this for freed pages if freed-memory
> +    // guard is enabled.
>       //
> -    CoreAddRange (MemType, Start, RangeEnd, Attribute);
> +    if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
> +        !ChangingType ||
> +        MemType != EfiConventionalMemory) {
> +      CoreAddRange (MemType, Start, RangeEnd, Attribute);
> +    }
> +
>       if (ChangingType && (MemType == EfiConventionalMemory)) {
>         //
>         // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
> @@ -1514,6 +1540,7 @@ CoreFreePages (
>   
>     Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
>     if (!EFI_ERROR (Status)) {
> +    GuardFreedPagesChecked (Memory, NumberOfPages);
>       CoreUpdateProfile (
>         (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
>         MemoryProfileActionFreePages,
> @@ -1908,9 +1935,7 @@ Done:
>     *MemoryMapSize = BufferSize;
>   
>     DEBUG_CODE (
> -    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
> -      DumpGuardedMemoryBitmap ();
> -    }
> +    DumpGuardedMemoryBitmap ();
>     );
>   
>     return Status;
> diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> index 1ff2061f7f..b9182ea807 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
> +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> @@ -1,7 +1,7 @@
>   /** @file
>     UEFI Memory pool management functions.
>   
> -Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
>   This program and the accompanying materials
>   are licensed and made available under the terms and conditions of the BSD License
>   which accompanies this distribution.  The full text of the license may be found at
> @@ -26,7 +26,8 @@ typedef struct {
>   } POOL_FREE;
>   
>   
> -#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
> +#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
> +#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
>   typedef struct {
>     UINT32          Signature;
>     UINT32          Reserved;
> @@ -367,6 +368,7 @@ CoreAllocatePoolI (
>     UINTN       NoPages;
>     UINTN       Granularity;
>     BOOLEAN     HasPoolTail;
> +  BOOLEAN     PageAsPool;
>   
>     ASSERT_LOCKED (&mPoolMemoryLock);
>   
> @@ -386,6 +388,7 @@ CoreAllocatePoolI (
>   
>     HasPoolTail  = !(NeedGuard &&
>                      ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> +  PageAsPool = (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) && !mOnGuarding);
>   
>     //
>     // Adjusting the Size to be of proper alignment so that
> @@ -406,7 +409,7 @@ CoreAllocatePoolI (
>     // If allocation is over max size, just allocate pages for the request
>     // (slow)
>     //
> -  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
> +  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
>       if (!HasPoolTail) {
>         Size -= sizeof (POOL_TAIL);
>       }
> @@ -498,7 +501,7 @@ Done:
>       //
>       // If we have a pool buffer, fill in the header & tail info
>       //
> -    Head->Signature = POOL_HEAD_SIGNATURE;
> +    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
>       Head->Size      = Size;
>       Head->Type      = (EFI_MEMORY_TYPE) PoolType;
>       Buffer          = Head->Data;
> @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
>     CoreFreePoolPages (Memory, NoPages);
>     CoreReleaseMemoryLock ();
>   
> +  GuardFreedPagesChecked (Memory, NoPages);
>     ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
>       (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
>   }
> @@ -685,15 +689,19 @@ CoreFreePoolI (
>     UINTN       Granularity;
>     BOOLEAN     IsGuarded;
>     BOOLEAN     HasPoolTail;
> +  BOOLEAN     PageAsPool;
>   
>     ASSERT(Buffer != NULL);
>     //
>     // Get the head & tail of the pool entry
>     //
> -  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
> +  Head = BASE_CR (Buffer, POOL_HEAD, Data);
>     ASSERT(Head != NULL);
>   
> -  if (Head->Signature != POOL_HEAD_SIGNATURE) {
> +  if (Head->Signature != POOL_HEAD_SIGNATURE &&
> +      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
> +    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
> +            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
>       return EFI_INVALID_PARAMETER;
>     }
>   
> @@ -701,6 +709,7 @@ CoreFreePoolI (
>                   IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
>     HasPoolTail = !(IsGuarded &&
>                     ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> +  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
>   
>     if (HasPoolTail) {
>       Tail = HEAD_TO_TAIL (Head);
> @@ -757,7 +766,7 @@ CoreFreePoolI (
>     //
>     // If it's not on the list, it must be pool pages
>     //
> -  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
> +  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
>   
>       //
>       // Return the memory pages back to free memory
> diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> index fa8f8fe91a..6298b67db1 100644
> --- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> +++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> @@ -1250,7 +1250,7 @@ ApplyMemoryProtectionPolicy (
>     // Don't overwrite Guard pages, which should be the first and/or last page,
>     // if any.
>     //
> -  if (IsHeapGuardEnabled ()) {
> +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
>       if (IsGuardPage (Memory))  {
>         Memory += EFI_PAGE_SIZE;
>         Length -= EFI_PAGE_SIZE;
> diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> index 05eb4f422b..04cfb2dab2 100644
> --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> @@ -1,7 +1,7 @@
>   /** @file
>     UEFI PropertiesTable support
>   
> -Copyright (c) 2015 - 2017, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2015 - 2018, Intel Corporation. All rights reserved.<BR>
>   This program and the accompanying materials
>   are licensed and made available under the terms and conditions of the BSD License
>   which accompanies this distribution.  The full text of the license may be found at
> @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>   #include <Guid/PropertiesTable.h>
>   
>   #include "DxeMain.h"
> +#include "HeapGuard.h"
>   
>   #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
>     ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
> @@ -205,16 +206,13 @@ MergeMemoryMap (
>       NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
>   
>       do {
> -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
> +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
> +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
>         if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
> -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> -        if (NewMemoryMapEntry != MemoryMapEntry) {
> -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> -        }
> -
> +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
>           NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
>           continue;
>         } else {
> 



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 1/6] MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation
  2018-10-25  2:55   ` Zeng, Star
@ 2018-10-25  4:21     ` Wang, Jian J
  0 siblings, 0 replies; 16+ messages in thread
From: Wang, Jian J @ 2018-10-25  4:21 UTC (permalink / raw)
  To: Zeng, Star, edk2-devel@lists.01.org
  Cc: Kinney, Michael D, Ni, Ruiyu, Yao, Jiewen, Laszlo Ersek

Got it. Thanks.

Regards,
Jian


> -----Original Message-----
> From: Zeng, Star
> Sent: Thursday, October 25, 2018 10:56 AM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
> Subject: Re: [edk2] [PATCH v3 1/6] MdeModulePkg: cleanup Heap Guard
> pool/page type PCD documentation
> 
> On 2018/10/24 13:26, Jian J Wang wrote:
> >> v3 changes:
> >> a. split from #1 patch of v2
> >> b. update title
> >
> > This cleanup is meant for avoiding misuse of newly introduced BIT4
> > (UAF detection) of PCD PcdHeapGuardPropertyMask, because it applies
> > to all types of physical memory. In another words,
> > PcdHeapGuardPoolType and PcdHeapGuardPageType don't have effect to
> > the BIT4 of PcdHeapGuardPropertyMask.
> >
> > Cc: Star Zeng <star.zeng@intel.com>
> > Cc: Michael D Kinney <michael.d.kinney@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> 
> Reviewed-by: Star Zeng <star.zeng@intel.com>
> 
> You may add Laszlo's RB and even a Suggested-by, according to Laszlo's
> feedback on the V2 patch series.
> 
> 
> Thanks,
> Star
> 
> > ---
> >   MdeModulePkg/MdeModulePkg.dec | 4 ++++
> >   MdeModulePkg/MdeModulePkg.uni | 2 ++
> >   2 files changed, 6 insertions(+)
> >
> > diff --git a/MdeModulePkg/MdeModulePkg.dec
> b/MdeModulePkg/MdeModulePkg.dec
> > index 6037504fa7..2009dbc5fd 100644
> > --- a/MdeModulePkg/MdeModulePkg.dec
> > +++ b/MdeModulePkg/MdeModulePkg.dec
> > @@ -955,6 +955,8 @@
> >     # free pages for all of them. The page allocation for the type related to
> >     # cleared bits keeps the same as ususal.
> >     #
> > +  # This PCD is only valid if BIT0 and/or BIT2 are set in
> PcdHeapGuardPropertyMask.
> > +  #
> >     # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
> >     #  EfiReservedMemoryType             0x0000000000000001<BR>
> >     #  EfiLoaderCode                     0x0000000000000002<BR>
> > @@ -984,6 +986,8 @@
> >     # if there's enough free memory for all of them. The pool allocation for the
> >     # type related to cleared bits keeps the same as ususal.
> >     #
> > +  # This PCD is only valid if BIT1 and/or BIT3 are set in
> PcdHeapGuardPropertyMask.
> > +  #
> >     # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
> >     #  EfiReservedMemoryType             0x0000000000000001<BR>
> >     #  EfiLoaderCode                     0x0000000000000002<BR>
> > diff --git a/MdeModulePkg/MdeModulePkg.uni
> b/MdeModulePkg/MdeModulePkg.uni
> > index a6bcb627cf..9d2e473fa9 100644
> > --- a/MdeModulePkg/MdeModulePkg.uni
> > +++ b/MdeModulePkg/MdeModulePkg.uni
> > @@ -1171,6 +1171,7 @@
> >                                                                                           " before and after
> corresponding type of pages allocated if there's enough\n"
> >                                                                                           " free pages for all of them.
> The page allocation for the type related to\n"
> >                                                                                           " cleared bits keeps the same
> as ususal.\n\n"
> > +                                                                                        " This PCD is only valid if BIT0
> and/or BIT2 are set in PcdHeapGuardPropertyMask.\n\n"
> >                                                                                           " Below is bit mask for this
> PCD: (Order is same as UEFI spec)<BR>\n"
> >                                                                                           "  EfiReservedMemoryType
> 0x0000000000000001\n"
> >                                                                                           "  EfiLoaderCode
> 0x0000000000000002\n"
> > @@ -1198,6 +1199,7 @@
> >                                                                                           " before and after
> corresponding type of pages which the allocated pool occupies,\n"
> >                                                                                           " if there's enough free
> memory for all of them. The pool allocation for the\n"
> >                                                                                           " type related to cleared bits
> keeps the same as ususal.\n\n"
> > +                                                                                        " This PCD is only valid if BIT1
> and/or BIT3 are set in PcdHeapGuardPropertyMask.\n\n"
> >                                                                                           " Below is bit mask for this
> PCD: (Order is same as UEFI spec)<BR>\n"
> >                                                                                           "  EfiReservedMemoryType
> 0x0000000000000001\n"
> >                                                                                           "  EfiLoaderCode
> 0x0000000000000002\n"
> >


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 2/6] MdeModulePkg: introduce UEFI freed-memory guard bit in HeapGuard PCD
  2018-10-25  3:02   ` Zeng, Star
@ 2018-10-25  4:23     ` Wang, Jian J
  0 siblings, 0 replies; 16+ messages in thread
From: Wang, Jian J @ 2018-10-25  4:23 UTC (permalink / raw)
  To: Zeng, Star, edk2-devel@lists.01.org
  Cc: Kinney, Michael D, Ni, Ruiyu, Yao, Jiewen, Laszlo Ersek

Star,

Regards,
Jian


> -----Original Message-----
> From: Zeng, Star
> Sent: Thursday, October 25, 2018 11:02 AM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
> Subject: Re: [edk2] [PATCH v3 2/6] MdeModulePkg: introduce UEFI freed-
> memory guard bit in HeapGuard PCD
> 
> On 2018/10/24 13:26, Jian J Wang wrote:
> >> v3 changes:
> >> a. split from v2 #1 patch file.
> >> b. refine the commit message and title.
> >
> > A UAF (Use-After-Free) memory issue is a kind of illegal access to
> > memory which has been freed. It can be detected by a new freed-memory
> > guard enforced onto freed memory.
> >
> > BIT4 of following PCD is used to enable the freed-memory guard feature.
> >
> >    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
> >
> > Please note this feature is for debug purpose and should not be enabled
> 
> Suggest also adding this information into the PCD description.
> Pool/page heap guard also has the same condition, right?
> If yes, we can have a generic sentence for the whole PCD.
> 
> With this addressed, Reviewed-by: Star Zeng <star.zeng@intel.com>.
> 

Sure. I'll update the dec/uni file with it. Thanks.

> 
> Thanks,
> Star
> 
> > in product BIOS, and cannot be enabled with pool/page heap guard at the
> > same time. It's disabled by default.
> >
> > Cc: Star Zeng <star.zeng@intel.com>
> > Cc: Michael D Kinney <michael.d.kinney@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> > ---
> >   MdeModulePkg/MdeModulePkg.dec | 6 ++++++
> >   MdeModulePkg/MdeModulePkg.uni | 4 +++-
> >   2 files changed, 9 insertions(+), 1 deletion(-)
> >
> > diff --git a/MdeModulePkg/MdeModulePkg.dec
> b/MdeModulePkg/MdeModulePkg.dec
> > index 2009dbc5fd..255b92ea67 100644
> > --- a/MdeModulePkg/MdeModulePkg.dec
> > +++ b/MdeModulePkg/MdeModulePkg.dec
> > @@ -1011,14 +1011,20 @@
> >
> gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType|0x0|UINT64|0x30
> 001053
> >
> >     ## This mask is to control Heap Guard behavior.
> > +  #
> >     # Note that due to the limit of pool memory implementation and the
> alignment
> >     # requirement of UEFI spec, BIT7 is a try-best setting which cannot
> guarantee
> >     # that the returned pool is exactly adjacent to head guard page or tail guard
> >     # page.
> > +  #
> > +  # Note that UEFI freed-memory guard and pool/page guard cannot be
> enabled
> > +  # at the same time.
> > +  #
> >     #   BIT0 - Enable UEFI page guard.<BR>
> >     #   BIT1 - Enable UEFI pool guard.<BR>
> >     #   BIT2 - Enable SMM page guard.<BR>
> >     #   BIT3 - Enable SMM pool guard.<BR>
> > +  #   BIT4 - Enable UEFI freed-memory guard (Use-After-Free memory
> detection).<BR>
> >     #   BIT6 - Enable non-stop mode.<BR>
> >     #   BIT7 - The direction of Guard Page for Pool Guard.
> >     #          0 - The returned pool is near the tail guard page.<BR>
> > diff --git a/MdeModulePkg/MdeModulePkg.uni
> b/MdeModulePkg/MdeModulePkg.uni
> > index 9d2e473fa9..e72b893509 100644
> > --- a/MdeModulePkg/MdeModulePkg.uni
> > +++ b/MdeModulePkg/MdeModulePkg.uni
> > @@ -1227,11 +1227,13 @@
> >                                                                                               "Note that due to the limit
> of pool memory implementation and the alignment\n"
> >                                                                                               "requirement of UEFI spec,
> BIT7 is a try-best setting which cannot guarantee\n"
> >                                                                                               "that the returned pool is
> exactly adjacent to head guard page or tail guard\n"
> > -                                                                                            "page.\n"
> > +                                                                                            "page.\n\n"
> > +                                                                                            "Note that UEFI freed-
> memory guard and pool/page guard cannot be enabled at the same time.\n\n"
> >                                                                                               "   BIT0 - Enable UEFI page
> guard.<BR>\n"
> >                                                                                               "   BIT1 - Enable UEFI pool
> guard.<BR>\n"
> >                                                                                               "   BIT2 - Enable SMM page
> guard.<BR>\n"
> >                                                                                               "   BIT3 - Enable SMM pool
> guard.<BR>\n"
> > +                                                                                            "   BIT4 - Enable UEFI
> freed-memory guard (Use-After-Free memory detection).<BR>\n"
> >                                                                                               "   BIT7 - The direction of
> Guard Page for Pool Guard.\n"
> >                                                                                               "          0 - The returned
> pool is near the tail guard page.<BR>\n"
> >                                                                                               "          1 - The returned
> pool is near the head guard page.<BR>"
> >


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock
  2018-10-25  3:22   ` Zeng, Star
@ 2018-10-25  4:24     ` Wang, Jian J
  0 siblings, 0 replies; 16+ messages in thread
From: Wang, Jian J @ 2018-10-25  4:24 UTC (permalink / raw)
  To: Zeng, Star, edk2-devel@lists.01.org
  Cc: Kinney, Michael D, Ni, Ruiyu, Yao, Jiewen, Laszlo Ersek

Sure. I'll change them. Thanks.

Regards,
Jian


> -----Original Message-----
> From: Zeng, Star
> Sent: Thursday, October 25, 2018 11:23 AM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
> Subject: Re: [edk2] [PATCH v3 5/6] MdeModulePkg/Core: prevent re-acquire
> GCD memory lock
> 
> Some minor comments are added below.
> With them addressed, Reviewed-by: Star Zeng <star.zeng@intel.com>.
> 
> On 2018/10/24 13:26, Jian J Wang wrote:
> >> v3 changes:
> >> a. drop the changes to CoreGetIoSpaceMap() because it won't cause
> >>     problem
> >> b. refine the logic in changes of CoreGetMemorySpaceMap()
> >>     and add more comments
> >
> > This issue is hidden in the current code but exposed by the
> > introduction of the freed-memory guard feature, due to the fact that
> > the feature will turn all pool allocations into page allocations.
> >
> > The solution is move the memory allocation in CoreGetMemorySpaceMap()
> 
> Use 'moving' instead of 'move'?
> 
> > and CoreGetIoSpaceMap() to be out of the GCD memory map lock.
> 
> Please remove CoreGetIoSpaceMap() as the code does not touch it at all.
> 
> >
> > Although it's rare, a while-loop is added to make sure that the count
> > of memory map descriptors is the same after memory allocation, because
> > the allocation is executed outside the GCD memory lock.
> >
> >     CoreDumpGcdMemorySpaceMap()
> > => CoreGetMemorySpaceMap()
> > => CoreAcquireGcdMemoryLock () *
> >     AllocatePool()
> > => InternalAllocatePool()
> > => CoreAllocatePool()
> > => CoreAllocatePoolI()
> > => CoreAllocatePoolPagesI()
> > => CoreAllocatePoolPages()
> > => FindFreePages()
> > => PromoteMemoryResource()
> > => CoreAcquireGcdMemoryLock()  **
> >
> > Cc: Star Zeng <star.zeng@intel.com>
> > Cc: Michael D Kinney <michael.d.kinney@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> > ---
> >   MdeModulePkg/Core/Dxe/Gcd/Gcd.c | 80 +++++++++++++++++++++++++++--------------
> >   1 file changed, 54 insertions(+), 26 deletions(-)
> >
> > diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > index d9c65a8045..bc02945721 100644
> > --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > @@ -1691,10 +1691,10 @@ CoreGetMemorySpaceMap (
> >     OUT EFI_GCD_MEMORY_SPACE_DESCRIPTOR  **MemorySpaceMap
> >     )
> >   {
> > -  EFI_STATUS                       Status;
> >     LIST_ENTRY                       *Link;
> >     EFI_GCD_MAP_ENTRY                *Entry;
> >     EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *Descriptor;
> > +  UINTN                            DescriptorCount;
> >
> >     //
> >     // Make sure parameters are valid
> > @@ -1706,38 +1706,66 @@ CoreGetMemorySpaceMap (
> >       return EFI_INVALID_PARAMETER;
> >     }
> >
> > +  *NumberOfDescriptors  = 0;
> > +  *MemorySpaceMap       = NULL;
> > +
> >     CoreAcquireGcdMemoryLock ();
> 
> Better to add a comment for it, like the one below, quoted from the example
> at https://lists.01.org/pipermail/edk2-devel/2018-October/031279.html given
> by Laszlo.
> 
> //
> // Take the lock, for entering the loop with the lock held.
> //
> 
> >
> > -  //
> > -  // Count the number of descriptors
> > -  //
> > -  *NumberOfDescriptors = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
> > +  while (TRUE) {
> > +    //
> > +    // Count the number of descriptors. It might be done more than once because
> 
> Use 'count' instead of 'number' to match the variable name?
> 
> > +    // there's code which has to be running outside the GCD lock.
> 
> Use "AllocatePool() calling code below" instead of "there's code"?
> 
> > +    //
> > +    DescriptorCount = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
> > +    if (DescriptorCount == *NumberOfDescriptors) {
> > +      //
> > +      // Fill in the MemorySpaceMap if no memory space map change.
> > +      //
> > +      Descriptor = *MemorySpaceMap;
> > +      Link = mGcdMemorySpaceMap.ForwardLink;
> > +      while (Link != &mGcdMemorySpaceMap) {
> > +        Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
> > +        BuildMemoryDescriptor (Descriptor, Entry);
> > +        Descriptor++;
> > +        Link = Link->ForwardLink;
> > +      }
> >
> > -  //
> > -  // Allocate the MemorySpaceMap
> > -  //
> > -  *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
> > -  if (*MemorySpaceMap == NULL) {
> > -    Status = EFI_OUT_OF_RESOURCES;
> > -    goto Done;
> > -  }
> > +      break;
> > +    }
> >
> > -  //
> > -  // Fill in the MemorySpaceMap
> > -  //
> > -  Descriptor = *MemorySpaceMap;
> > -  Link = mGcdMemorySpaceMap.ForwardLink;
> > -  while (Link != &mGcdMemorySpaceMap) {
> > -    Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
> > -    BuildMemoryDescriptor (Descriptor, Entry);
> > -    Descriptor++;
> > -    Link = Link->ForwardLink;
> > +    //
> > +    // Release the lock before memory allocation, because it might cause
> > +    // GCD lock conflict in one of the calling paths in AllocatePool().
> > +    //
> > +    CoreReleaseGcdMemoryLock ();
> > +
> > +    //
> > +    // Allocate memory to store the MemorySpaceMap. Note it might be already
> > +    // allocated if there's map descriptor change during memory allocation at
> > +    // last time.
> > +    //
> > +    if (*MemorySpaceMap != NULL) {
> > +      FreePool (*MemorySpaceMap);
> > +    }
> > +
> > +    *MemorySpaceMap = AllocatePool (DescriptorCount *
> > +                                    sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
> > +    if (*MemorySpaceMap == NULL) {
> > +      *NumberOfDescriptors = 0;
> > +      return EFI_OUT_OF_RESOURCES;
> > +    }
> > +
> > +    //
> > +    // Save the descriptor number got before for another round of check to make
> 
> Use 'count' instead of 'number' to match the variable name?
> 
> > +    // sure we won't miss any, since we have code running outside the GCD lock.
> > +    //
> > +    *NumberOfDescriptors = DescriptorCount;
> > +    CoreAcquireGcdMemoryLock ();
> 
> Better to add a comment for it, like the one below, quoted from the example
> at https://lists.01.org/pipermail/edk2-devel/2018-October/031279.html given
> by Laszlo.
> 
> //
> // Re-acquire the lock, for the next iteration.
> //
> 
> >     }
> > -  Status = EFI_SUCCESS;
> >
> > -Done:
> >     CoreReleaseGcdMemoryLock ();
> 
> Better to add a comment for it, like the one below, quoted from the example
> at https://lists.01.org/pipermail/edk2-devel/2018-October/031279.html given
> by Laszlo.
> 
> //
> // We exited the loop with the lock held, release it.
> //
> 
> 
> Thanks,
> Star
> 
> > -  return Status;
> > +
> > +  return EFI_SUCCESS;
> >   }
> >
> >
> >


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 6/6] MdeModulePkg/Core: add freed-memory guard feature
  2018-10-25  3:37   ` Zeng, Star
@ 2018-10-25  4:29     ` Wang, Jian J
  2018-10-25  6:28     ` Wang, Jian J
  1 sibling, 0 replies; 16+ messages in thread
From: Wang, Jian J @ 2018-10-25  4:29 UTC (permalink / raw)
  To: Zeng, Star, edk2-devel@lists.01.org
  Cc: Kinney, Michael D, Ni, Ruiyu, Yao, Jiewen, Laszlo Ersek

Star,

Regards,
Jian


> -----Original Message-----
> From: Zeng, Star
> Sent: Thursday, October 25, 2018 11:37 AM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
> Subject: Re: [edk2] [PATCH v3 6/6] MdeModulePkg/Core: add freed-memory
> guard feature
> 
> On 2018/10/24 13:26, Jian J Wang wrote:
> >> v3 changes:
> >> a. Merge from #4 and #5 of v2 patch
> >> b. Coding style cleanup
> >
> > Freed-memory guard is used to detect UAF (Use-After-Free) memory issues,
> > which are illegal accesses to memory that has been freed. The principle
> > behind it is similar to the heap guard feature, that is, we'll turn all pool
> 
> Since we also regard UAF as part of the heap guard feature, better to use
> "pool/page heap guard feature" instead of "heap guard feature" here.
> 

You're right. I'll change it.

> I quoted a piece of code below and have a question: why does it use a
> hard-coded Attribute parameter?
> 
> +      CoreAddRange (
> +        EfiConventionalMemory,
> +        StartAddress,
> +        EndAddress,
> +        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
> +        );
> 

Because I don't know the actual attributes at that point, I think it'd be
safer to add all the supported ones.

> Thanks,
> Star
> 
> > memory allocation to page allocation and mark them to be not-present
> > once they are freed. The freed memory block will not be added back into
> > memory pool.
> >
> > This means that, once a page is allocated and freed, it cannot be
> > re-allocated. This brings an issue: there's a risk that memory space
> > will be used up. To address it, the memory service adds logic to put
> > part (at most 64 pages at a time) of the freed pages back into the page
> > pool, so that the memory service can still have memory to allocate when
> > all memory space has been allocated once. This is called memory promotion.
> > The promoted pages are always the eldest pages which have been freed.
> >
> > This feature brings another problem: memory map descriptors will
> > increase enormously (200+ -> 2000+). One of the changes in this patch
> > is to update MergeMemoryMap() in file PropertiesTable.c to allow merging
> > freed pages back into the memory map. Now the number can stay at around
> > 510.
> >
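
To see concretely what the feature catches, consider a fragment like the
following sketch (hypothetical driver code, not part of the patch). With BIT4
of PcdHeapGuardPropertyMask set, the last store below lands on a not-present
page and faults immediately, instead of silently corrupting memory that may
have been handed out to someone else:

  #include <Uefi.h>
  #include <Library/MemoryAllocationLib.h>
  #include <Library/DebugLib.h>

  VOID
  UseAfterFreeDemo (
    VOID
    )
  {
    UINT8  *Buffer;

    Buffer = AllocatePool (128);   // with BIT4, pool requests become page allocations
    ASSERT (Buffer != NULL);
    Buffer[0] = 0xA5;              // legal access while allocated
    FreePool (Buffer);             // freed range is recorded and mapped not-present
    Buffer[0] = 0x5A;              // use-after-free: trips a page fault right here
  }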
> > Cc: Star Zeng <star.zeng@intel.com>
> > Cc: Michael D Kinney <michael.d.kinney@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> > ---
> >   MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 409 +++++++++++++++++++++++++-
> >   MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |  65 +++-
> >   MdeModulePkg/Core/Dxe/Mem/Page.c              |  41 ++-
> >   MdeModulePkg/Core/Dxe/Mem/Pool.c              |  23 +-
> >   MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   2 +-
> >   MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c  |  18 +-
> >   6 files changed, 524 insertions(+), 34 deletions(-)
> >
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > index 663f969c0d..449a022658 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >   GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >                                       = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
> >
> > +//
> > +// Used for promoting freed but not used pages.
> > +//
> > +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
> > +
> >   /**
> >     Set corresponding bits in bitmap table to 1 according to the address.
> >
> > @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
> >
> >     @return An integer containing the guarded memory bitmap.
> >   **/
> > -UINTN
> > +UINT64
> >   GetGuardedMemoryBits (
> >     IN EFI_PHYSICAL_ADDRESS    Address,
> >     IN UINTN                   NumberOfPages
> > @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
> >   {
> >     UINT64            *BitMap;
> >     UINTN             Bits;
> > -  UINTN             Result;
> > +  UINT64            Result;
> >     UINTN             Shift;
> >     UINTN             BitsToUnitEnd;
> >
> > @@ -660,15 +665,16 @@ IsPageTypeToGuard (
> >   /**
> >     Check to see if the heap guard is enabled for page and/or pool allocation.
> >
> > +  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
> > +
> >     @return TRUE/FALSE.
> >   **/
> >   BOOLEAN
> >   IsHeapGuardEnabled (
> > -  VOID
> > +  UINT8           GuardType
> >     )
> >   {
> > -  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages,
> > -                              GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_PAGE);
> > +  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages, GuardType);
> >   }
> >
> >   /**
> > @@ -1203,6 +1209,380 @@ SetAllGuardPages (
> >     }
> >   }
> >
> > +/**
> > +  Find the address of top-most guarded free page.
> > +
> > +  @param[out]  Address    Start address of top-most guarded free page.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +GetLastGuardedFreePageAddress (
> > +  OUT EFI_PHYSICAL_ADDRESS      *Address
> > +  )
> > +{
> > +  EFI_PHYSICAL_ADDRESS    AddressGranularity;
> > +  EFI_PHYSICAL_ADDRESS    BaseAddress;
> > +  UINTN                   Level;
> > +  UINT64                  Map;
> > +  INTN                    Index;
> > +
> > +  ASSERT (mMapLevel >= 1);
> > +
> > +  BaseAddress = 0;
> > +  Map = mGuardedMemoryMap;
> > +  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > +       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
> > +       ++Level) {
> > +    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
> > +
> > +    //
> > +    // Find the non-NULL entry at largest index.
> > +    //
> > +    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
> > +      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
> > +        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
> > +        Map = ((UINT64 *)(UINTN)Map)[Index];
> > +        break;
> > +      }
> > +    }
> > +  }
> > +
> > +  //
> > +  // Find the non-zero MSB then get the page address.
> > +  //
> > +  while (Map != 0) {
> > +    Map = RShiftU64 (Map, 1);
> > +    BaseAddress += EFI_PAGES_TO_SIZE (1);
> > +  }
> > +
> > +  *Address = BaseAddress;
> > +}
> > +
> > +/**
> > +  Record freed pages.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +MarkFreedPages (
> > +  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
> > +  IN UINTN                    Pages
> > +  )
> > +{
> > +  SetGuardedMemoryBits (BaseAddress, Pages);
> > +}
> > +
> > +/**
> > +  Record freed pages as well as mark them as not-present.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPages (
> > +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> > +  IN  UINTN                   Pages
> > +  )
> > +{
> > +  EFI_STATUS      Status;
> > +
> > +  //
> > +  // Legacy memory lower than 1MB might be accessed with no allocation. Leave
> > +  // them alone.
> > +  //
> > +  if (BaseAddress < BASE_1MB) {
> > +    return;
> > +  }
> > +
> > +  MarkFreedPages (BaseAddress, Pages);
> > +  if (gCpu != NULL) {
> > +    //
> > +    // Set flag to make sure allocating memory without GUARD for page table
> > +    // operation; otherwise infinite loops could be caused.
> > +    //
> > +    mOnGuarding = TRUE;
> > +    //
> > +    // Note: This might overwrite other attributes needed by other features,
> > +    // such as NX memory protection.
> > +    //
> > +    Status = gCpu->SetMemoryAttributes (
> > +                     gCpu,
> > +                     BaseAddress,
> > +                     EFI_PAGES_TO_SIZE (Pages),
> > +                     EFI_MEMORY_RP
> > +                     );
> > +    //
> > +    // Normally we should ASSERT the returned Status. But there might be memory
> > +    // alloc/free involved in SetMemoryAttributes(), which might fail this
> > +    // calling. It's rare case so it's OK to let a few tiny holes be not-guarded.
> > +    //
> > +    if (EFI_ERROR (Status)) {
> > +      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%lu)\n", BaseAddress, (UINT64)Pages));
> > +    }
> > +    mOnGuarding = FALSE;
> > +  }
> > +}
> > +
> > +/**
> > +  Record freed pages as well as mark them as not-present, if enabled.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPagesChecked (
> > +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> > +  IN  UINTN                   Pages
> > +  )
> > +{
> > +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > +    GuardFreedPages (BaseAddress, Pages);
> > +  }
> > +}
> > +
> > +/**
> > +  Mark all pages freed before CPU Arch Protocol as not-present.
> > +
> > +**/
> > +VOID
> > +GuardAllFreedPages (
> > +  VOID
> > +  )
> > +{
> > +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    TableEntry;
> > +  UINT64    Address;
> > +  UINT64    GuardPage;
> > +  INTN      Level;
> > +  UINTN     BitIndex;
> > +  UINTN     GuardPageNumber;
> > +
> > +  if (mGuardedMemoryMap == 0 ||
> > +      mMapLevel == 0 ||
> > +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > +    return;
> > +  }
> > +
> > +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> > +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> > +
> > +  SetMem (Tables, sizeof(Tables), 0);
> > +  SetMem (Addresses, sizeof(Addresses), 0);
> > +  SetMem (Indices, sizeof(Indices), 0);
> > +
> > +  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > +  Tables[Level]   = mGuardedMemoryMap;
> > +  Address         = 0;
> > +  GuardPage       = (UINT64)-1;
> > +  GuardPageNumber = 0;
> > +
> > +  while (TRUE) {
> > +    if (Indices[Level] > Entries[Level]) {
> > +      Tables[Level] = 0;
> > +      Level        -= 1;
> > +    } else {
> > +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> > +      Address     = Addresses[Level];
> > +
> > +      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> > +        Level            += 1;
> > +        Tables[Level]     = TableEntry;
> > +        Addresses[Level]  = Address;
> > +        Indices[Level]    = 0;
> > +
> > +        continue;
> > +      } else {
> > +        BitIndex = 1;
> > +        while (BitIndex != 0) {
> > +          if ((TableEntry & BitIndex) != 0) {
> > +            if (GuardPage == (UINT64)-1) {
> > +              GuardPage = Address;
> > +            }
> > +            ++GuardPageNumber;
> > +          } else if (GuardPageNumber > 0) {
> > +            GuardFreedPages (GuardPage, GuardPageNumber);
> > +            GuardPageNumber = 0;
> > +            GuardPage       = (UINT64)-1;
> > +          }
> > +
> > +          if (TableEntry == 0) {
> > +            break;
> > +          }
> > +
> > +          Address += EFI_PAGES_TO_SIZE (1);
> > +          BitIndex = LShiftU64 (BitIndex, 1);
> > +        }
> > +      }
> > +    }
> > +
> > +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> > +      break;
> > +    }
> > +
> > +    Indices[Level] += 1;
> > +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> > +    Addresses[Level] = Address | LShiftU64 (Indices[Level], Shifts[Level]);
> > +
> > +  }
> > +
> > +  //
> > +  // Update the maximum address of freed page which can be used for memory
> > +  // promotion upon out-of-memory-space.
> > +  //
> > +  GetLastGuardedFreePageAddress (&Address);
> > +  if (Address != 0) {
> > +    mLastPromotedPage = Address;
> > +  }
> > +}
> > +
> > +/**
> > +  This function checks to see if the given memory map descriptor in a memory map
> > +  can be merged with any guarded free pages.
> > +
> > +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> > +  @param  MaxAddress        Maximum address to stop the merge.
> > +
> > +  @return VOID
> > +
> > +**/
> > +VOID
> > +MergeGuardPages (
> > +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> > +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> > +  )
> > +{
> > +  EFI_PHYSICAL_ADDRESS        EndAddress;
> > +  UINT64                      Bitmap;
> > +  INTN                        Pages;
> > +
> > +  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
> > +      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
> > +    return;
> > +  }
> > +
> > +  Bitmap = 0;
> > +  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
> > +  Pages -= MemoryMapEntry->NumberOfPages;
> > +  while (Pages > 0) {
> > +    if (Bitmap == 0) {
> > +      EndAddress = MemoryMapEntry->PhysicalStart +
> > +                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
> > +      Bitmap = GetGuardedMemoryBits (EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    }
> > +
> > +    if ((Bitmap & 1) == 0) {
> > +      break;
> > +    }
> > +
> > +    Pages--;
> > +    MemoryMapEntry->NumberOfPages++;
> > +    Bitmap = RShiftU64 (Bitmap, 1);
> > +  }
> > +}
> > +
> > +/**
> > +  Put part (at most 64 pages a time) guarded free pages back to free page pool.
> > +
> > +  Freed memory guard is used to detect Use-After-Free (UAF) memory issue, which
> > +  makes use of 'Used then throw away' way to detect any illegal access to freed
> > +  memory. The thrown-away memory will be marked as not-present so that any access
> > +  to those memory (after free) will be caught by page-fault exception.
> > +
> > +  The problem is that this will consume lots of memory space. Once no memory
> > +  left in pool to allocate, we have to restore part of the freed pages to their
> > +  normal function. Otherwise the whole system will stop functioning.
> > +
> > +  @param  StartAddress    Start address of promoted memory.
> > +  @param  EndAddress      End address of promoted memory.
> > +
> > +  @return TRUE    Succeeded to promote memory.
> > +  @return FALSE   No free memory found.
> > +
> > +**/
> > +BOOLEAN
> > +PromoteGuardedFreePages (
> > +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> > +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> > +  )
> > +{
> > +  EFI_STATUS              Status;
> > +  UINTN                   AvailablePages;
> > +  UINT64                  Bitmap;
> > +  EFI_PHYSICAL_ADDRESS    Start;
> > +
> > +  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > +    return FALSE;
> > +  }
> > +
> > +  //
> > +  // Similar to memory allocation service, always search the freed pages in
> > +  // descending direction.
> > +  //
> > +  Start           = mLastPromotedPage;
> > +  AvailablePages  = 0;
> > +  while (AvailablePages == 0) {
> > +    Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    //
> > +    // If the address wraps around, try the really freed pages at top.
> > +    //
> > +    if (Start > mLastPromotedPage) {
> > +      GetLastGuardedFreePageAddress (&Start);
> > +      ASSERT (Start != 0);
> > +      Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    }
> > +
> > +    Bitmap = GetGuardedMemoryBits (Start, GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    while (Bitmap > 0) {
> > +      if ((Bitmap & 1) != 0) {
> > +        ++AvailablePages;
> > +      } else if (AvailablePages == 0) {
> > +        Start += EFI_PAGES_TO_SIZE (1);
> > +      } else {
> > +        break;
> > +      }
> > +
> > +      Bitmap = RShiftU64 (Bitmap, 1);
> > +    }
> > +  }
> > +
> > +  if (AvailablePages) {
> > +    DEBUG ((DEBUG_INFO, "Promoted pages: %lX (%lx)\r\n", Start, (UINT64)AvailablePages));
> > +    ClearGuardedMemoryBits (Start, AvailablePages);
> > +
> > +    if (gCpu != NULL) {
> > +      //
> > +      // Set flag to make sure allocating memory without GUARD for page table
> > +      // operation; otherwise infinite loops could be caused.
> > +      //
> > +      mOnGuarding = TRUE;
> > +      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
> > +      ASSERT_EFI_ERROR (Status);
> > +      mOnGuarding = FALSE;
> > +    }
> > +
> > +    mLastPromotedPage = Start;
> > +    *StartAddress     = Start;
> > +    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
> > +    return TRUE;
> > +  }
> > +
> > +  return FALSE;
> > +}
> > +
> >   /**
> >     Notify function used to set all Guard pages before CPU Arch Protocol installed.
> >   **/
> > @@ -1212,7 +1592,20 @@ HeapGuardCpuArchProtocolNotify (
> >     )
> >   {
> >     ASSERT (gCpu != NULL);
> > -  SetAllGuardPages ();
> > +
> > +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL) &&
> > +      IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > +    DEBUG ((DEBUG_ERROR, "Heap guard and freed memory guard cannot be enabled at the same time.\n"));
> > +    CpuDeadLoop ();
> > +  }
> > +
> > +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
> > +    SetAllGuardPages ();
> > +  }
> > +
> > +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > +    GuardAllFreedPages ();
> > +  }
> >   }
> >
> >   /**
> > @@ -1264,6 +1657,10 @@ DumpGuardedMemoryBitmap (
> >     CHAR8     *Ruler1;
> >     CHAR8     *Ruler2;
> >
> > +  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_ALL)) {
> > +    return;
> > +  }
> > +
> >     if (mGuardedMemoryMap == 0 ||
> >         mMapLevel == 0 ||
> >         mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > index 8c34692439..55a91ec098 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > @@ -1,7 +1,7 @@
> >   /** @file
> >     Data type, macros and function prototypes of heap guard feature.
> >
> > -Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
> > +Copyright (c) 2017-2018, Intel Corporation. All rights reserved.<BR>
> >   This program and the accompanying materials
> >   are licensed and made available under the terms and conditions of the BSD License
> >   which accompanies this distribution.  The full text of the license may be found at
> > @@ -160,6 +160,9 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> >   //
> >   #define GUARD_HEAP_TYPE_PAGE        BIT0
> >   #define GUARD_HEAP_TYPE_POOL        BIT1
> > +#define GUARD_HEAP_TYPE_FREED       BIT4
> > +#define GUARD_HEAP_TYPE_ALL         \
> > +        (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_FREED)
> >
> >   //
> >   // Debug message level
> > @@ -392,11 +395,13 @@ AdjustPoolHeadF (
> >   /**
> >     Check to see if the heap guard is enabled for page and/or pool allocation.
> >
> > +  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
> > +
> >     @return TRUE/FALSE.
> >   **/
> >   BOOLEAN
> >   IsHeapGuardEnabled (
> > -  VOID
> > +  UINT8           GuardType
> >     );
> >
> >   /**
> > @@ -407,6 +412,62 @@ HeapGuardCpuArchProtocolNotify (
> >     VOID
> >     );
> >
> > +/**
> > +  This function checks to see if the given memory map descriptor in a memory map
> > +  can be merged with any guarded free pages.
> > +
> > +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> > +  @param  MaxAddress        Maximum address to stop the merge.
> > +
> > +  @return VOID
> > +
> > +**/
> > +VOID
> > +MergeGuardPages (
> > +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> > +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> > +  );
> > +
> > +/**
> > +  Record freed pages as well as mark them as not-present, if enabled.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPagesChecked (
> > +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> > +  IN  UINTN                   Pages
> > +  );
> > +
> > +/**
> > +  Put part (at most 64 pages a time) guarded free pages back to free page pool.
> > +
> > +  Freed memory guard is used to detect Use-After-Free (UAF) memory issue, which
> > +  makes use of 'Used then throw away' way to detect any illegal access to freed
> > +  memory. The thrown-away memory will be marked as not-present so that any access
> > +  to those memory (after free) will be caught by page-fault exception.
> > +
> > +  The problem is that this will consume lots of memory space. Once no memory
> > +  left in pool to allocate, we have to restore part of the freed pages to their
> > +  normal function. Otherwise the whole system will stop functioning.
> > +
> > +  @param  StartAddress    Start address of promoted memory.
> > +  @param  EndAddress      End address of promoted memory.
> > +
> > +  @return TRUE    Succeeded to promote memory.
> > +  @return FALSE   No free memory found.
> > +
> > +**/
> > +BOOLEAN
> > +PromoteGuardedFreePages (
> > +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> > +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> > +  );
> > +
> >   extern BOOLEAN mOnGuarding;
> >
> >   #endif
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
> > index 3b4cc08e7c..9d9abcf565 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
> > @@ -401,9 +401,11 @@ PromoteMemoryResource (
> >     VOID
> >     )
> >   {
> > -  LIST_ENTRY         *Link;
> > -  EFI_GCD_MAP_ENTRY  *Entry;
> > -  BOOLEAN            Promoted;
> > +  LIST_ENTRY            *Link;
> > +  EFI_GCD_MAP_ENTRY     *Entry;
> > +  BOOLEAN               Promoted;
> > +  EFI_PHYSICAL_ADDRESS  StartAddress;
> > +  EFI_PHYSICAL_ADDRESS  EndAddress;
> >
> >     DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
> >
> > @@ -451,6 +453,24 @@ PromoteMemoryResource (
> >
> >     CoreReleaseGcdMemoryLock ();
> >
> > +  if (!Promoted) {
> > +    //
> > +    // If freed-memory guard is enabled, we could promote pages from
> > +    // guarded free pages.
> > +    //
> > +    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
> > +    if (Promoted) {
> > +      CoreAcquireGcdMemoryLock ();
> > +      CoreAddRange (
> > +        EfiConventionalMemory,
> > +        StartAddress,
> > +        EndAddress,
> > +        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
> > +        );
> > +      CoreReleaseGcdMemoryLock ();
> > +    }
> > +  }
> > +
> >     return Promoted;
> >   }
> >   /**
> > @@ -896,9 +916,15 @@ CoreConvertPagesEx (
> >       }
> >
> >       //
> > -    // Add our new range in
> > +    // Add our new range in. Don't do this for freed pages if freed-memory
> > +    // guard is enabled.
> >       //
> > -    CoreAddRange (MemType, Start, RangeEnd, Attribute);
> > +    if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
> > +        !ChangingType ||
> > +        MemType != EfiConventionalMemory) {
> > +      CoreAddRange (MemType, Start, RangeEnd, Attribute);
> > +    }
> > +
> >       if (ChangingType && (MemType == EfiConventionalMemory)) {
> >         //
> >         // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
> > @@ -1514,6 +1540,7 @@ CoreFreePages (
> >
> >     Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
> >     if (!EFI_ERROR (Status)) {
> > +    GuardFreedPagesChecked (Memory, NumberOfPages);
> >       CoreUpdateProfile (
> >         (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
> >         MemoryProfileActionFreePages,
> > @@ -1908,9 +1935,7 @@ Done:
> >     *MemoryMapSize = BufferSize;
> >
> >     DEBUG_CODE (
> > -    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
> > -      DumpGuardedMemoryBitmap ();
> > -    }
> > +    DumpGuardedMemoryBitmap ();
> >     );
> >
> >     return Status;
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > index 1ff2061f7f..b9182ea807 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > @@ -1,7 +1,7 @@
> >   /** @file
> >     UEFI Memory pool management functions.
> >
> > -Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
> > +Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
> >   This program and the accompanying materials
> >   are licensed and made available under the terms and conditions of the BSD License
> >   which accompanies this distribution.  The full text of the license may be found at
> > @@ -26,7 +26,8 @@ typedef struct {
> >   } POOL_FREE;
> >
> >
> > -#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
> > +#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
> > +#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
> >   typedef struct {
> >     UINT32          Signature;
> >     UINT32          Reserved;
> > @@ -367,6 +368,7 @@ CoreAllocatePoolI (
> >     UINTN       NoPages;
> >     UINTN       Granularity;
> >     BOOLEAN     HasPoolTail;
> > +  BOOLEAN     PageAsPool;
> >
> >     ASSERT_LOCKED (&mPoolMemoryLock);
> >
> > @@ -386,6 +388,7 @@ CoreAllocatePoolI (
> >
> >     HasPoolTail  = !(NeedGuard &&
> >                      ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > +  PageAsPool = (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) && !mOnGuarding);
> >
> >     //
> >     // Adjusting the Size to be of proper alignment so that
> > @@ -406,7 +409,7 @@ CoreAllocatePoolI (
> >     // If allocation is over max size, just allocate pages for the request
> >     // (slow)
> >     //
> > -  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
> > +  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
> >       if (!HasPoolTail) {
> >         Size -= sizeof (POOL_TAIL);
> >       }
> > @@ -498,7 +501,7 @@ Done:
> >       //
> >       // If we have a pool buffer, fill in the header & tail info
> >       //
> > -    Head->Signature = POOL_HEAD_SIGNATURE;
> > +    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
> >       Head->Size      = Size;
> >       Head->Type      = (EFI_MEMORY_TYPE) PoolType;
> >       Buffer          = Head->Data;
> > @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
> >     CoreFreePoolPages (Memory, NoPages);
> >     CoreReleaseMemoryLock ();
> >
> > +  GuardFreedPagesChecked (Memory, NoPages);
> >     ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
> >       (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
> >   }
> > @@ -685,15 +689,19 @@ CoreFreePoolI (
> >     UINTN       Granularity;
> >     BOOLEAN     IsGuarded;
> >     BOOLEAN     HasPoolTail;
> > +  BOOLEAN     PageAsPool;
> >
> >     ASSERT(Buffer != NULL);
> >     //
> >     // Get the head & tail of the pool entry
> >     //
> > -  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
> > +  Head = BASE_CR (Buffer, POOL_HEAD, Data);
> >     ASSERT(Head != NULL);
> >
> > -  if (Head->Signature != POOL_HEAD_SIGNATURE) {
> > +  if (Head->Signature != POOL_HEAD_SIGNATURE &&
> > +      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
> > +    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
> > +            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >       return EFI_INVALID_PARAMETER;
> >     }
> >
> > @@ -701,6 +709,7 @@ CoreFreePoolI (
> >                   IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
> >     HasPoolTail = !(IsGuarded &&
> >                     ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > +  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >
> >     if (HasPoolTail) {
> >       Tail = HEAD_TO_TAIL (Head);
> > @@ -757,7 +766,7 @@ CoreFreePoolI (
> >     //
> >     // If it's not on the list, it must be pool pages
> >     //
> > -  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
> > +  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
> >
> >       //
> >       // Return the memory pages back to free memory
> > diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > index fa8f8fe91a..6298b67db1 100644
> > --- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > +++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > @@ -1250,7 +1250,7 @@ ApplyMemoryProtectionPolicy (
> >     // Don't overwrite Guard pages, which should be the first and/or last page,
> >     // if any.
> >     //
> > -  if (IsHeapGuardEnabled ()) {
> > +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
> >       if (IsGuardPage (Memory))  {
> >         Memory += EFI_PAGE_SIZE;
> >         Length -= EFI_PAGE_SIZE;
> > diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > index 05eb4f422b..04cfb2dab2 100644
> > --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > @@ -1,7 +1,7 @@
> >   /** @file
> >     UEFI PropertiesTable support
> >
> > -Copyright (c) 2015 - 2017, Intel Corporation. All rights reserved.<BR>
> > +Copyright (c) 2015 - 2018, Intel Corporation. All rights reserved.<BR>
> >   This program and the accompanying materials
> >   are licensed and made available under the terms and conditions of the BSD License
> >   which accompanies this distribution.  The full text of the license may be found at
> > @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> >   #include <Guid/PropertiesTable.h>
> >
> >   #include "DxeMain.h"
> > +#include "HeapGuard.h"
> >
> >   #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
> >     ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
> > @@ -205,16 +206,13 @@ MergeMemoryMap (
> >       NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
> >
> >       do {
> > -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
> > +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
> > +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
> >         if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
> > -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> > -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> > -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> > -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> > -        if (NewMemoryMapEntry != MemoryMapEntry) {
> > -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> > -        }
> > -
> > +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> > +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> > +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> > +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> >           NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
> >           continue;
> >         } else {
> >


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 6/6] MdeModulePkg/Core: add freed-memory guard feature
  2018-10-25  3:37   ` Zeng, Star
  2018-10-25  4:29     ` Wang, Jian J
@ 2018-10-25  6:28     ` Wang, Jian J
  1 sibling, 0 replies; 16+ messages in thread
From: Wang, Jian J @ 2018-10-25  6:28 UTC (permalink / raw)
  To: Zeng, Star, edk2-devel@lists.01.org
  Cc: Kinney, Michael D, Ni, Ruiyu, Yao, Jiewen, Laszlo Ersek

Star,

I think CoreGetMemorySpaceDescriptor() can get the memory capabilities, so
we can remove the hard-coded ones. In addition, since CoreAddRange() doesn't
touch mGcdMemorySpaceMap, CoreAcquireGcdMemoryLock() and
CoreReleaseGcdMemoryLock() are not necessary to protect CoreAddRange(); I'll
drop them as well.
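
A sketch of how that could look (illustrative only; the actual v4 change may
differ, and which capability bits to mask off is a design choice):

  EFI_GCD_MEMORY_SPACE_DESCRIPTOR  Descriptor;

  Status = CoreGetMemorySpaceDescriptor (StartAddress, &Descriptor);
  if (!EFI_ERROR (Status)) {
    //
    // Use the capabilities of the range being promoted instead of a
    // hard-coded UC|WC|WT|WB set.
    //
    CoreAddRange (
      EfiConventionalMemory,
      StartAddress,
      EndAddress,
      Descriptor.Capabilities & ~EFI_MEMORY_RUNTIME
      );
  }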

Regards,
Jian


> -----Original Message-----
> From: Zeng, Star
> Sent: Thursday, October 25, 2018 11:37 AM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
> Subject: Re: [edk2] [PATCH v3 6/6] MdeModulePkg/Core: add freed-memory
> guard feature
> 
> On 2018/10/24 13:26, Jian J Wang wrote:
> >> v3 changes:
> >> a. Merge from #4 and #5 of v2 patch
> >> b. Coding style cleanup
> >
> > Freed-memory guard is used to detect UAF (Use-After-Free) memory issues,
> > which are illegal accesses to memory that has been freed. The principle
> > behind it is similar to the heap guard feature, that is, we'll turn all pool
> 
> Since we also regard UAF as part of the heap guard feature, better to use
> "pool/page heap guard feature" instead of "heap guard feature" here.
> 
> I quoted a piece of code below and have a question: why does it use a
> hard-coded Attribute parameter?
> 
> +      CoreAddRange (
> +        EfiConventionalMemory,
> +        StartAddress,
> +        EndAddress,
> +        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
> +        );
> 
> Thanks,
> Star
> 
> > memory allocation to page allocation and mark them to be not-present
> > once they are freed. The freed memory block will not be added back into
> > memory pool.
> >
> > This means that, once a page is allocated and freed, it cannot be
> > re-allocated. This brings an issue: there's a risk that memory space
> > will be used up. To address it, the memory service adds logic to put
> > part (at most 64 pages at a time) of the freed pages back into the page
> > pool, so that the memory service can still have memory to allocate when
> > all memory space has been allocated once. This is called memory promotion.
> > The promoted pages are always the eldest pages which have been freed.
> >
> > This feature brings another problem: memory map descriptors will
> > increase enormously (200+ -> 2000+). One of the changes in this patch
> > is to update MergeMemoryMap() in file PropertiesTable.c to allow merging
> > freed pages back into the memory map. Now the number can stay at around
> > 510.
> >
> > Cc: Star Zeng <star.zeng@intel.com>
> > Cc: Michael D Kinney <michael.d.kinney@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> > ---
> >   MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 409 +++++++++++++++++++++++++-
> >   MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |  65 +++-
> >   MdeModulePkg/Core/Dxe/Mem/Page.c              |  41 ++-
> >   MdeModulePkg/Core/Dxe/Mem/Pool.c              |  23 +-
> >   MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   2 +-
> >   MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c  |  18 +-
> >   6 files changed, 524 insertions(+), 34 deletions(-)
> >
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > index 663f969c0d..449a022658 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >   GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >                                       = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
> >
> > +//
> > +// Used for promoting freed but not used pages.
> > +//
> > +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
> > +
> >   /**
> >     Set corresponding bits in bitmap table to 1 according to the address.
> >
> > @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
> >
> >     @return An integer containing the guarded memory bitmap.
> >   **/
> > -UINTN
> > +UINT64
> >   GetGuardedMemoryBits (
> >     IN EFI_PHYSICAL_ADDRESS    Address,
> >     IN UINTN                   NumberOfPages
> > @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
> >   {
> >     UINT64            *BitMap;
> >     UINTN             Bits;
> > -  UINTN             Result;
> > +  UINT64            Result;
> >     UINTN             Shift;
> >     UINTN             BitsToUnitEnd;
> >
> > @@ -660,15 +665,16 @@ IsPageTypeToGuard (
> >   /**
> >     Check to see if the heap guard is enabled for page and/or pool allocation.
> >
> > +  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
> > +
> >     @return TRUE/FALSE.
> >   **/
> >   BOOLEAN
> >   IsHeapGuardEnabled (
> > -  VOID
> > +  UINT8           GuardType
> >     )
> >   {
> > -  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages,
> > -                              GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_PAGE);
> > +  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages, GuardType);
> >   }
> >
> >   /**
> > @@ -1203,6 +1209,380 @@ SetAllGuardPages (
> >     }
> >   }
> >
> > +/**
> > +  Find the address of top-most guarded free page.
> > +
> > +  @param[out]  Address    Start address of top-most guarded free page.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +GetLastGuardedFreePageAddress (
> > +  OUT EFI_PHYSICAL_ADDRESS      *Address
> > +  )
> > +{
> > +  EFI_PHYSICAL_ADDRESS    AddressGranularity;
> > +  EFI_PHYSICAL_ADDRESS    BaseAddress;
> > +  UINTN                   Level;
> > +  UINT64                  Map;
> > +  INTN                    Index;
> > +
> > +  ASSERT (mMapLevel >= 1);
> > +
> > +  BaseAddress = 0;
> > +  Map = mGuardedMemoryMap;
> > +  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > +       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
> > +       ++Level) {
> > +    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
> > +
> > +    //
> > +    // Find the non-NULL entry at largest index.
> > +    //
> > +    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
> > +      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
> > +        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
> > +        Map = ((UINT64 *)(UINTN)Map)[Index];
> > +        break;
> > +      }
> > +    }
> > +  }
> > +
> > +  //
> > +  // Find the non-zero MSB then get the page address.
> > +  //
> > +  while (Map != 0) {
> > +    Map = RShiftU64 (Map, 1);
> > +    BaseAddress += EFI_PAGES_TO_SIZE (1);
> > +  }
> > +
> > +  *Address = BaseAddress;
> > +}
> > +
> > +/**
> > +  Record freed pages.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +MarkFreedPages (
> > +  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
> > +  IN UINTN                    Pages
> > +  )
> > +{
> > +  SetGuardedMemoryBits (BaseAddress, Pages);
> > +}
> > +
> > +/**
> > +  Record freed pages as well as mark them as not-present.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPages (
> > +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> > +  IN  UINTN                   Pages
> > +  )
> > +{
> > +  EFI_STATUS      Status;
> > +
> > +  //
> > +  // Legacy memory lower than 1MB might be accessed with no allocation. Leave
> > +  // them alone.
> > +  //
> > +  if (BaseAddress < BASE_1MB) {
> > +    return;
> > +  }
> > +
> > +  MarkFreedPages (BaseAddress, Pages);
> > +  if (gCpu != NULL) {
> > +    //
> > +    // Set flag to make sure allocating memory without GUARD for page table
> > +    // operation; otherwise infinite loops could be caused.
> > +    //
> > +    mOnGuarding = TRUE;
> > +    //
> > +    // Note: This might overwrite other attributes needed by other features,
> > +    // such as NX memory protection.
> > +    //
> > +    Status = gCpu->SetMemoryAttributes (
> > +                     gCpu,
> > +                     BaseAddress,
> > +                     EFI_PAGES_TO_SIZE (Pages),
> > +                     EFI_MEMORY_RP
> > +                     );
> > +    //
> > +    // Normally we should ASSERT the returned Status. But there might be memory
> > +    // alloc/free involved in SetMemoryAttributes(), which might fail this
> > +    // calling. It's rare case so it's OK to let a few tiny holes be not-guarded.
> > +    //
> > +    if (EFI_ERROR (Status)) {
> > +      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%lu)\n", BaseAddress, (UINT64)Pages));
> > +    }
> > +    mOnGuarding = FALSE;
> > +  }
> > +}
> > +
> > +/**
> > +  Record freed pages as well as mark them as not-present, if enabled.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPagesChecked (
> > +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> > +  IN  UINTN                   Pages
> > +  )
> > +{
> > +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > +    GuardFreedPages (BaseAddress, Pages);
> > +  }
> > +}
> > +
> > +/**
> > +  Mark all pages freed before CPU Arch Protocol as not-present.
> > +
> > +**/
> > +VOID
> > +GuardAllFreedPages (
> > +  VOID
> > +  )
> > +{
> > +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    TableEntry;
> > +  UINT64    Address;
> > +  UINT64    GuardPage;
> > +  INTN      Level;
> > +  UINTN     BitIndex;
> > +  UINTN     GuardPageNumber;
> > +
> > +  if (mGuardedMemoryMap == 0 ||
> > +      mMapLevel == 0 ||
> > +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > +    return;
> > +  }
> > +
> > +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> > +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> > +
> > +  SetMem (Tables, sizeof(Tables), 0);
> > +  SetMem (Addresses, sizeof(Addresses), 0);
> > +  SetMem (Indices, sizeof(Indices), 0);
> > +
> > +  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > +  Tables[Level]   = mGuardedMemoryMap;
> > +  Address         = 0;
> > +  GuardPage       = (UINT64)-1;
> > +  GuardPageNumber = 0;
> > +
> > +  while (TRUE) {
> > +    if (Indices[Level] > Entries[Level]) {
> > +      Tables[Level] = 0;
> > +      Level        -= 1;
> > +    } else {
> > +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> > +      Address     = Addresses[Level];
> > +
> > +      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> > +        Level            += 1;
> > +        Tables[Level]     = TableEntry;
> > +        Addresses[Level]  = Address;
> > +        Indices[Level]    = 0;
> > +
> > +        continue;
> > +      } else {
> > +        BitIndex = 1;
> > +        while (BitIndex != 0) {
> > +          if ((TableEntry & BitIndex) != 0) {
> > +            if (GuardPage == (UINT64)-1) {
> > +              GuardPage = Address;
> > +            }
> > +            ++GuardPageNumber;
> > +          } else if (GuardPageNumber > 0) {
> > +            GuardFreedPages (GuardPage, GuardPageNumber);
> > +            GuardPageNumber = 0;
> > +            GuardPage       = (UINT64)-1;
> > +          }
> > +
> > +          if (TableEntry == 0) {
> > +            break;
> > +          }
> > +
> > +          Address += EFI_PAGES_TO_SIZE (1);
> > +          BitIndex = LShiftU64 (BitIndex, 1);
> > +        }
> > +      }
> > +    }
> > +
> > +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> > +      break;
> > +    }
> > +
> > +    Indices[Level] += 1;
> > +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> > +    Addresses[Level] = Address | LShiftU64 (Indices[Level], Shifts[Level]);
> > +
> > +  }
> > +
> > +  //
> > +  // Update the maximum address of freed page which can be used for memory
> > +  // promotion upon out-of-memory-space.
> > +  //
> > +  GetLastGuardedFreePageAddress (&Address);
> > +  if (Address != 0) {
> > +    mLastPromotedPage = Address;
> > +  }
> > +}
> > +
> > +/**
> > +  This function checks to see if the given memory map descriptor in a memory map
> > +  can be merged with any guarded free pages.
> > +
> > +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> > +  @param  MaxAddress        Maximum address to stop the merge.
> > +
> > +  @return VOID
> > +
> > +**/
> > +VOID
> > +MergeGuardPages (
> > +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> > +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> > +  )
> > +{
> > +  EFI_PHYSICAL_ADDRESS        EndAddress;
> > +  UINT64                      Bitmap;
> > +  INTN                        Pages;
> > +
> > +  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
> > +      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
> > +    return;
> > +  }
> > +
> > +  Bitmap = 0;
> > +  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
> > +  Pages -= MemoryMapEntry->NumberOfPages;
> > +  while (Pages > 0) {
> > +    if (Bitmap == 0) {
> > +      EndAddress = MemoryMapEntry->PhysicalStart +
> > +                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
> > +      Bitmap = GetGuardedMemoryBits (EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    }
> > +
> > +    if ((Bitmap & 1) == 0) {
> > +      break;
> > +    }
> > +
> > +    Pages--;
> > +    MemoryMapEntry->NumberOfPages++;
> > +    Bitmap = RShiftU64 (Bitmap, 1);
> > +  }
> > +}
> > +
> > +/**
> > +  Put part (at most 64 pages a time) guarded free pages back to free page pool.
> > +
> > +  Freed memory guard is used to detect Use-After-Free (UAF) memory issue, which
> > +  makes use of 'Used then throw away' way to detect any illegal access to freed
> > +  memory. The thrown-away memory will be marked as not-present so that any access
> > +  to those memory (after free) will be caught by page-fault exception.
> > +
> > +  The problem is that this will consume lots of memory space. Once no memory
> > +  left in pool to allocate, we have to restore part of the freed pages to their
> > +  normal function. Otherwise the whole system will stop functioning.
> > +
> > +  @param  StartAddress    Start address of promoted memory.
> > +  @param  EndAddress      End address of promoted memory.
> > +
> > +  @return TRUE    Succeeded to promote memory.
> > +  @return FALSE   No free memory found.
> > +
> > +**/
> > +BOOLEAN
> > +PromoteGuardedFreePages (
> > +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> > +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> > +  )
> > +{
> > +  EFI_STATUS              Status;
> > +  UINTN                   AvailablePages;
> > +  UINT64                  Bitmap;
> > +  EFI_PHYSICAL_ADDRESS    Start;
> > +
> > +  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > +    return FALSE;
> > +  }
> > +
> > +  //
> > +  // Similar to memory allocation service, always search the freed pages in
> > +  // descending direction.
> > +  //
> > +  Start           = mLastPromotedPage;
> > +  AvailablePages  = 0;
> > +  while (AvailablePages == 0) {
> > +    Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    //
> > +    // If the address wraps around, try the really freed pages at top.
> > +    //
> > +    if (Start > mLastPromotedPage) {
> > +      GetLastGuardedFreePageAddress (&Start);
> > +      ASSERT (Start != 0);
> > +      Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    }
> > +
> > +    Bitmap = GetGuardedMemoryBits (Start, GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    while (Bitmap > 0) {
> > +      if ((Bitmap & 1) != 0) {
> > +        ++AvailablePages;
> > +      } else if (AvailablePages == 0) {
> > +        Start += EFI_PAGES_TO_SIZE (1);
> > +      } else {
> > +        break;
> > +      }
> > +
> > +      Bitmap = RShiftU64 (Bitmap, 1);
> > +    }
> > +  }
> > +
> > +  if (AvailablePages) {
> > +    DEBUG ((DEBUG_INFO, "Promoted pages: %lX (%lx)\r\n", Start, (UINT64)AvailablePages));
> > +    ClearGuardedMemoryBits (Start, AvailablePages);
> > +
> > +    if (gCpu != NULL) {
> > +      //
> > +      // Set flag to make sure allocating memory without GUARD for page table
> > +      // operation; otherwise infinite loops could be caused.
> > +      //
> > +      mOnGuarding = TRUE;
> > +      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
> > +      ASSERT_EFI_ERROR (Status);
> > +      mOnGuarding = FALSE;
> > +    }
> > +
> > +    mLastPromotedPage = Start;
> > +    *StartAddress     = Start;
> > +    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
> > +    return TRUE;
> > +  }
> > +
> > +  return FALSE;
> > +}
> > +
> >   /**
> >     Notify function used to set all Guard pages before CPU Arch Protocol installed.
> >   **/
> > @@ -1212,7 +1592,20 @@ HeapGuardCpuArchProtocolNotify (
> >     )
> >   {
> >     ASSERT (gCpu != NULL);
> > -  SetAllGuardPages ();
> > +
> > +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL) &&
> > +      IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > +    DEBUG ((DEBUG_ERROR, "Heap guard and freed memory guard cannot be enabled at the same time.\n"));
> > +    CpuDeadLoop ();
> > +  }
> > +
> > +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
> > +    SetAllGuardPages ();
> > +  }
> > +
> > +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > +    GuardAllFreedPages ();
> > +  }
> >   }
> >
> >   /**
> > @@ -1264,6 +1657,10 @@ DumpGuardedMemoryBitmap (
> >     CHAR8     *Ruler1;
> >     CHAR8     *Ruler2;
> >
> > +  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_ALL)) {
> > +    return;
> > +  }
> > +
> >     if (mGuardedMemoryMap == 0 ||
> >         mMapLevel == 0 ||
> >         mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > index 8c34692439..55a91ec098 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > @@ -1,7 +1,7 @@
> >   /** @file
> >     Data type, macros and function prototypes of heap guard feature.
> >
> > -Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
> > +Copyright (c) 2017-2018, Intel Corporation. All rights reserved.<BR>
> >   This program and the accompanying materials
> >   are licensed and made available under the terms and conditions of the BSD License
> >   which accompanies this distribution.  The full text of the license may be found at
> > @@ -160,6 +160,9 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> >   //
> >   #define GUARD_HEAP_TYPE_PAGE        BIT0
> >   #define GUARD_HEAP_TYPE_POOL        BIT1
> > +#define GUARD_HEAP_TYPE_FREED       BIT4
> > +#define GUARD_HEAP_TYPE_ALL         \
> > +        (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_FREED)
> >
> >   //
> >   // Debug message level
> > @@ -392,11 +395,13 @@ AdjustPoolHeadF (
> >   /**
> >     Check to see if the heap guard is enabled for page and/or pool allocation.
> >
> > +  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
> > +
> >     @return TRUE/FALSE.
> >   **/
> >   BOOLEAN
> >   IsHeapGuardEnabled (
> > -  VOID
> > +  UINT8           GuardType
> >     );
> >
> >   /**
> > @@ -407,6 +412,62 @@ HeapGuardCpuArchProtocolNotify (
> >     VOID
> >     );
> >
> > +/**
> > +  This function checks to see if the given memory map descriptor in a memory map
> > +  can be merged with any guarded free pages.
> > +
> > +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> > +  @param  MaxAddress        Maximum address to stop the merge.
> > +
> > +  @return VOID
> > +
> > +**/
> > +VOID
> > +MergeGuardPages (
> > +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> > +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> > +  );
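The call pattern for this shows up in the PropertiesTable.c hunk further
below; stripped of the surrounding merge logic it boils down to the following
sketch (MemoryMap, MemoryMapEnd, DescriptorSize and MaxAddress are assumed to
come from the enclosing map-walking function):

  EFI_MEMORY_DESCRIPTOR  *Entry;
  EFI_MEMORY_DESCRIPTOR  *Next;

  //
  // Let each descriptor absorb the guarded free pages that immediately
  // follow it, but never beyond the start of the next descriptor.
  //
  for (Entry = MemoryMap; Entry < MemoryMapEnd; Entry = Next) {
    Next = NEXT_MEMORY_DESCRIPTOR (Entry, DescriptorSize);
    MergeGuardPages (
      Entry,
      ((UINTN)Next < (UINTN)MemoryMapEnd) ? Next->PhysicalStart : MaxAddress
      );
  }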
> > +
> > +/**
> > +  Record freed pages as well as mark them as not-present, if enabled.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPagesChecked (
> > +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> > +  IN  UINTN                   Pages
> > +  );
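The body of this function is not among the hunks quoted in this message;
given the name and the comment above, a sketch consistent with the rest of
the series might look like this (SetGuardedMemoryBits is assumed from the
existing HeapGuard.c helpers, and EFI_MEMORY_RP is the not-present attribute
already used for guard pages):

  VOID
  EFIAPI
  GuardFreedPagesChecked (
    IN  EFI_PHYSICAL_ADDRESS  BaseAddress,
    IN  UINTN                 Pages
    )
  {
    //
    // No-op unless the freed-memory guard is on, so every free path can
    // call this unconditionally.
    //
    if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
      //
      // Record the range in the guard bitmap, then unmap it so that any
      // later access faults; mOnGuarding keeps page-table allocations out
      // of the guard path to avoid recursion.
      //
      SetGuardedMemoryBits (BaseAddress, Pages);
      if (gCpu != NULL) {
        mOnGuarding = TRUE;
        gCpu->SetMemoryAttributes (
                gCpu,
                BaseAddress,
                EFI_PAGES_TO_SIZE (Pages),
                EFI_MEMORY_RP
                );
        mOnGuarding = FALSE;
      }
    }
  }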
> > +
> > +/**
> > +  Put part (at most 64 pages a time) of the guarded free pages back into the
> > +  free page pool.
> > +
> > +  The freed-memory guard is used to detect Use-After-Free (UAF) memory issues.
> > +  It takes a 'use then throw away' approach: freed memory is not reused but is
> > +  marked as not-present instead, so that any access to it after free will be
> > +  caught by a page-fault exception.
> > +
> > +  The problem is that this consumes a lot of memory space. Once no memory is
> > +  left in the pool to allocate, we have to restore part of the freed pages to
> > +  their normal function. Otherwise the whole system will stop functioning.
> > +
> > +  @param  StartAddress    Start address of promoted memory.
> > +  @param  EndAddress      End address of promoted memory.
> > +
> > +  @return TRUE    Succeeded to promote memory.
> > +  @return FALSE   No free memory found.
> > +
> > +**/
> > +BOOLEAN
> > +PromoteGuardedFreePages (
> > +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> > +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> > +  );
> > +
> >   extern BOOLEAN mOnGuarding;
> >
> >   #endif
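Since GUARD_HEAP_TYPE_PAGE/POOL/FREED mirror BIT0/BIT1/BIT4 of
PcdHeapGuardPropertyMask, the reworked IsHeapGuardEnabled can reduce to a
single mask test. The HeapGuard.c hunk with the real body is not quoted
here, so take this only as a sketch under that bit-mapping assumption:

  BOOLEAN
  IsHeapGuardEnabled (
    UINT8  GuardType
    )
  {
    //
    // Any overlap between the requested sub-types and the PCD bits means
    // the corresponding guard is enabled.
    //
    return (BOOLEAN)((PcdGet8 (PcdHeapGuardPropertyMask) & GuardType) != 0);
  }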
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
> > index 3b4cc08e7c..9d9abcf565 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
> > @@ -401,9 +401,11 @@ PromoteMemoryResource (
> >     VOID
> >     )
> >   {
> > -  LIST_ENTRY         *Link;
> > -  EFI_GCD_MAP_ENTRY  *Entry;
> > -  BOOLEAN            Promoted;
> > +  LIST_ENTRY            *Link;
> > +  EFI_GCD_MAP_ENTRY     *Entry;
> > +  BOOLEAN               Promoted;
> > +  EFI_PHYSICAL_ADDRESS  StartAddress;
> > +  EFI_PHYSICAL_ADDRESS  EndAddress;
> >
> >     DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
> >
> > @@ -451,6 +453,24 @@ PromoteMemoryResource (
> >
> >     CoreReleaseGcdMemoryLock ();
> >
> > +  if (!Promoted) {
> > +    //
> > +    // If freed-memory guard is enabled, we could promote pages from
> > +    // guarded free pages.
> > +    //
> > +    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
> > +    if (Promoted) {
> > +      CoreAcquireGcdMemoryLock ();
> > +      CoreAddRange (
> > +        EfiConventionalMemory,
> > +        StartAddress,
> > +        EndAddress,
> > +        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
> > +        );
> > +      CoreReleaseGcdMemoryLock ();
> > +    }
> > +  }
> > +
> >     return Promoted;
> >   }
> >   /**
> > @@ -896,9 +916,15 @@ CoreConvertPagesEx (
> >       }
> >
> >       //
> > -    // Add our new range in
> > +    // Add our new range in. Don't do this for freed pages if freed-memory
> > +    // guard is enabled.
> >       //
> > -    CoreAddRange (MemType, Start, RangeEnd, Attribute);
> > +    if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
> > +        !ChangingType ||
> > +        MemType != EfiConventionalMemory) {
> > +      CoreAddRange (MemType, Start, RangeEnd, Attribute);
> > +    }
> > +
> >       if (ChangingType && (MemType == EfiConventionalMemory)) {
> >         //
> >         // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
> > @@ -1514,6 +1540,7 @@ CoreFreePages (
> >
> >     Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
> >     if (!EFI_ERROR (Status)) {
> > +    GuardFreedPagesChecked (Memory, NumberOfPages);
> >       CoreUpdateProfile (
> >         (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
> >         MemoryProfileActionFreePages,
> > @@ -1908,9 +1935,7 @@ Done:
> >     *MemoryMapSize = BufferSize;
> >
> >     DEBUG_CODE (
> > -    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
> > -      DumpGuardedMemoryBitmap ();
> > -    }
> > +    DumpGuardedMemoryBitmap ();
> >     );
> >
> >     return Status;
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > index 1ff2061f7f..b9182ea807 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > @@ -1,7 +1,7 @@
> >   /** @file
> >     UEFI Memory pool management functions.
> >
> > -Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
> > +Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
> >   This program and the accompanying materials
> >   are licensed and made available under the terms and conditions of the BSD License
> >   which accompanies this distribution.  The full text of the license may be found at
> > @@ -26,7 +26,8 @@ typedef struct {
> >   } POOL_FREE;
> >
> >
> > -#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
> > +#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
> > +#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
> >   typedef struct {
> >     UINT32          Signature;
> >     UINT32          Reserved;
> > @@ -367,6 +368,7 @@ CoreAllocatePoolI (
> >     UINTN       NoPages;
> >     UINTN       Granularity;
> >     BOOLEAN     HasPoolTail;
> > +  BOOLEAN     PageAsPool;
> >
> >     ASSERT_LOCKED (&mPoolMemoryLock);
> >
> > @@ -386,6 +388,7 @@ CoreAllocatePoolI (
> >
> >     HasPoolTail  = !(NeedGuard &&
> >                      ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > +  PageAsPool = (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) && !mOnGuarding);
> >
> >     //
> >     // Adjusting the Size to be of proper alignment so that
> > @@ -406,7 +409,7 @@ CoreAllocatePoolI (
> >     // If allocation is over max size, just allocate pages for the request
> >     // (slow)
> >     //
> > -  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
> > +  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
> >       if (!HasPoolTail) {
> >         Size -= sizeof (POOL_TAIL);
> >       }
> > @@ -498,7 +501,7 @@ Done:
> >       //
> >       // If we have a pool buffer, fill in the header & tail info
> >       //
> > -    Head->Signature = POOL_HEAD_SIGNATURE;
> > +    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
> >       Head->Size      = Size;
> >       Head->Type      = (EFI_MEMORY_TYPE) PoolType;
> >       Buffer          = Head->Data;
> > @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
> >     CoreFreePoolPages (Memory, NoPages);
> >     CoreReleaseMemoryLock ();
> >
> > +  GuardFreedPagesChecked (Memory, NoPages);
> >     ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
> >       (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
> >   }
> > @@ -685,15 +689,19 @@ CoreFreePoolI (
> >     UINTN       Granularity;
> >     BOOLEAN     IsGuarded;
> >     BOOLEAN     HasPoolTail;
> > +  BOOLEAN     PageAsPool;
> >
> >     ASSERT(Buffer != NULL);
> >     //
> >     // Get the head & tail of the pool entry
> >     //
> > -  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
> > +  Head = BASE_CR (Buffer, POOL_HEAD, Data);
> >     ASSERT(Head != NULL);
> >
> > -  if (Head->Signature != POOL_HEAD_SIGNATURE) {
> > +  if (Head->Signature != POOL_HEAD_SIGNATURE &&
> > +      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
> > +    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
> > +            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >       return EFI_INVALID_PARAMETER;
> >     }
> >
> > @@ -701,6 +709,7 @@ CoreFreePoolI (
> >                   IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
> >     HasPoolTail = !(IsGuarded &&
> >                     ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > +  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >
> >     if (HasPoolTail) {
> >       Tail = HEAD_TO_TAIL (Head);
> > @@ -757,7 +766,7 @@ CoreFreePoolI (
> >     //
> >     // If it's not on the list, it must be pool pages
> >     //
> > -  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
> > +  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
> >
> >       //
> >       // Return the memory pages back to free memory
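Pulled together, the Pool.c hunks make the free path key off the head
signature instead of asserting on a single one; roughly like this (sketch,
with the size/index bookkeeping elided and CoreFreePoolPagesI as in the hunk
above):

  //
  // BASE_CR instead of CR: two signatures are legal now, so validate by
  // hand and remember which flavor this buffer is.
  //
  Head       = BASE_CR (Buffer, POOL_HEAD, Data);
  PageAsPool = (BOOLEAN)(Head->Signature == POOLPAGE_HEAD_SIGNATURE);

  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
    //
    // Whole pages back this buffer: return them through the page path,
    // which re-guards them as freed pages when BIT4 is set.
    //
    CoreFreePoolPagesI (
      Head->Type,
      (EFI_PHYSICAL_ADDRESS)(UINTN)Head,
      NoPages
      );
  }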
> > diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > index fa8f8fe91a..6298b67db1 100644
> > --- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > +++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > @@ -1250,7 +1250,7 @@ ApplyMemoryProtectionPolicy (
> >     // Don't overwrite Guard pages, which should be the first and/or last page,
> >     // if any.
> >     //
> > -  if (IsHeapGuardEnabled ()) {
> > +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
> >       if (IsGuardPage (Memory))  {
> >         Memory += EFI_PAGE_SIZE;
> >         Length -= EFI_PAGE_SIZE;
> > diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > index 05eb4f422b..04cfb2dab2 100644
> > --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > @@ -1,7 +1,7 @@
> >   /** @file
> >     UEFI PropertiesTable support
> >
> > -Copyright (c) 2015 - 2017, Intel Corporation. All rights reserved.<BR>
> > +Copyright (c) 2015 - 2018, Intel Corporation. All rights reserved.<BR>
> >   This program and the accompanying materials
> >   are licensed and made available under the terms and conditions of the BSD License
> >   which accompanies this distribution.  The full text of the license may be found at
> > @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> >   #include <Guid/PropertiesTable.h>
> >
> >   #include "DxeMain.h"
> > +#include "HeapGuard.h"
> >
> >   #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
> >     ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
> > @@ -205,16 +206,13 @@ MergeMemoryMap (
> >       NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
> >
> >       do {
> > -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
> > +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
> > +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
> >         if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
> > -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> > -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> > -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> > -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> > -        if (NewMemoryMapEntry != MemoryMapEntry) {
> > -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> > -        }
> > -
> > +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> > +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> > +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> > +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> >           NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
> >           continue;
> >         } else {
> >


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2018-10-25  6:28 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-24  5:26 [PATCH v3 0/6] Introduce freed-memory guard feature Jian J Wang
2018-10-24  5:26 ` [PATCH v3 1/6] MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation Jian J Wang
2018-10-25  2:55   ` Zeng, Star
2018-10-25  4:21     ` Wang, Jian J
2018-10-24  5:26 ` [PATCH v3 2/6] MdeModulePkg: introduce UEFI freed-memory guard bit in HeapGuard PCD Jian J Wang
2018-10-25  3:02   ` Zeng, Star
2018-10-25  4:23     ` Wang, Jian J
2018-10-24  5:26 ` [PATCH v3 3/6] UefiCpuPkg/CpuDxe: consider freed-memory guard in non-stop mode Jian J Wang
2018-10-24  5:26 ` [PATCH v3 4/6] UefiCpuPkg/CpuDxe: prevent recursive calling of InitializePageTablePool Jian J Wang
2018-10-24  5:26 ` [PATCH v3 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock Jian J Wang
2018-10-25  3:22   ` Zeng, Star
2018-10-25  4:24     ` Wang, Jian J
2018-10-24  5:26 ` [PATCH v3 6/6] MdeModulePkg/Core: add freed-memory guard feature Jian J Wang
2018-10-25  3:37   ` Zeng, Star
2018-10-25  4:29     ` Wang, Jian J
2018-10-25  6:28     ` Wang, Jian J

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox