public inbox for devel@edk2.groups.io
 help / color / mirror / Atom feed
* [PATCH v4 0/6] Introduce freed-memory guard feature
@ 2018-10-25  7:17 Jian J Wang
  2018-10-25  7:18 ` [PATCH v4 1/6] MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation Jian J Wang
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Jian J Wang @ 2018-10-25  7:17 UTC (permalink / raw)
  To: edk2-devel

> v4 changes:
> Updated per comments from Star. Please refer to the individual patch
> files for details (#2/5/6)

Freed-memory guard is a new feature used to detect UAF (Use-After-Free)
memory issues.
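
As an illustration, here is the kind of bug this feature traps
(hypothetical driver code, not taken from this series):

  UINT8  *Buffer;

  Buffer = AllocatePool (100);
  ASSERT (Buffer != NULL);
  FreePool (Buffer);
  //
  // Use-after-free: with the freed-memory guard enabled, the freed page
  // is marked not-present, so this write triggers a page-fault exception
  // instead of silently corrupting re-used memory.
  //
  Buffer[0] = 0;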

Tests:
a. Feature basic unit/functionality test
b. OVMF regression test

Jian J Wang (6):
  MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation
  MdeModulePkg: introduce UEFI freed-memory guard bit in HeapGuard PCD
  UefiCpuPkg/CpuDxe: consider freed-memory guard in non-stop mode
  UefiCpuPkg/CpuDxe: prevent recursive calling of
    InitializePageTablePool
  MdeModulePkg/Core: prevent re-acquire GCD memory lock
  MdeModulePkg/Core: add freed-memory guard feature

 MdeModulePkg/Core/Dxe/Gcd/Gcd.c               |  87 ++++--
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 409 +++++++++++++++++++++++++-
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |  65 +++-
 MdeModulePkg/Core/Dxe/Mem/Page.c              |  42 ++-
 MdeModulePkg/Core/Dxe/Mem/Pool.c              |  23 +-
 MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   2 +-
 MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c  |  18 +-
 MdeModulePkg/MdeModulePkg.dec                 |  20 +-
 MdeModulePkg/MdeModulePkg.uni                 |  16 +-
 UefiCpuPkg/CpuDxe/CpuDxe.h                    |   2 +-
 UefiCpuPkg/CpuDxe/CpuPageTable.c              |  23 +-
 11 files changed, 637 insertions(+), 70 deletions(-)

-- 
2.16.2.windows.1



^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 1/6] MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation
  2018-10-25  7:17 [PATCH v4 0/6] Introduce freed-memory guard feature Jian J Wang
@ 2018-10-25  7:18 ` Jian J Wang
  2018-10-25  7:18 ` [PATCH v4 2/6] MdeModulePkg: introduce UEFI freed-memory guard bit in HeapGuard PCD Jian J Wang
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Jian J Wang @ 2018-10-25  7:18 UTC (permalink / raw)
  To: edk2-devel
  Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni, Laszlo Ersek

> v4 changes: none

This cleanup is meant to prevent misuse of the newly introduced BIT4
(UAF detection) of PCD PcdHeapGuardPropertyMask, because that bit
applies to all types of physical memory. In other words,
PcdHeapGuardPoolType and PcdHeapGuardPageType have no effect on BIT4 of
PcdHeapGuardPropertyMask.
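
As an illustrative example (platform DSC fragment, not part of this
patch), guarding page allocations of EfiBootServicesData needs both the
type PCD and the matching enable bit in the property mask; setting
PcdHeapGuardPageType alone has no effect:

  [PcdsFixedAtBuild]
    # BIT0: enable UEFI page guard
    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask|0x01
    # BIT4 in the type PCD = EfiBootServicesData (UEFI spec order)
    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType|0x10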

Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/MdeModulePkg.dec | 4 ++++
 MdeModulePkg/MdeModulePkg.uni | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/MdeModulePkg/MdeModulePkg.dec b/MdeModulePkg/MdeModulePkg.dec
index 6037504fa7..2009dbc5fd 100644
--- a/MdeModulePkg/MdeModulePkg.dec
+++ b/MdeModulePkg/MdeModulePkg.dec
@@ -955,6 +955,8 @@
   # free pages for all of them. The page allocation for the type related to
   # cleared bits keeps the same as ususal.
   #
+  # This PCD is only valid if BIT0 and/or BIT2 are set in PcdHeapGuardPropertyMask.
+  #
   # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
   #  EfiReservedMemoryType             0x0000000000000001<BR>
   #  EfiLoaderCode                     0x0000000000000002<BR>
@@ -984,6 +986,8 @@
   # if there's enough free memory for all of them. The pool allocation for the
   # type related to cleared bits keeps the same as ususal.
   #
+  # This PCD is only valid if BIT1 and/or BIT3 are set in PcdHeapGuardPropertyMask.
+  #
   # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
   #  EfiReservedMemoryType             0x0000000000000001<BR>
   #  EfiLoaderCode                     0x0000000000000002<BR>
diff --git a/MdeModulePkg/MdeModulePkg.uni b/MdeModulePkg/MdeModulePkg.uni
index a6bcb627cf..9d2e473fa9 100644
--- a/MdeModulePkg/MdeModulePkg.uni
+++ b/MdeModulePkg/MdeModulePkg.uni
@@ -1171,6 +1171,7 @@
                                                                                         " before and after corresponding type of pages allocated if there's enough\n"
                                                                                         " free pages for all of them. The page allocation for the type related to\n"
                                                                                         " cleared bits keeps the same as ususal.\n\n"
+                                                                                        " This PCD is only valid if BIT0 and/or BIT2 are set in PcdHeapGuardPropertyMask.\n\n"
                                                                                         " Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>\n"
                                                                                         "  EfiReservedMemoryType             0x0000000000000001\n"
                                                                                         "  EfiLoaderCode                     0x0000000000000002\n"
@@ -1198,6 +1199,7 @@
                                                                                         " before and after corresponding type of pages which the allocated pool occupies,\n"
                                                                                         " if there's enough free memory for all of them. The pool allocation for the\n"
                                                                                         " type related to cleared bits keeps the same as ususal.\n\n"
+                                                                                        " This PCD is only valid if BIT1 and/or BIT3 are set in PcdHeapGuardPropertyMask.\n\n"
                                                                                         " Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>\n"
                                                                                         "  EfiReservedMemoryType             0x0000000000000001\n"
                                                                                         "  EfiLoaderCode                     0x0000000000000002\n"
-- 
2.16.2.windows.1



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v4 2/6] MdeModulePkg: introduce UEFI freed-memory guard bit in HeapGuard PCD
  2018-10-25  7:17 [PATCH v4 0/6] Introduce freed-memory guard feature Jian J Wang
  2018-10-25  7:18 ` [PATCH v4 1/6] MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation Jian J Wang
@ 2018-10-25  7:18 ` Jian J Wang
  2018-10-25  7:18 ` [PATCH v4 3/6] UefiCpuPkg/CpuDxe: consider freed-memory guard in non-stop mode Jian J Wang
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Jian J Wang @ 2018-10-25  7:18 UTC (permalink / raw)
  To: edk2-devel
  Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni, Laszlo Ersek

> v4 changes:
> a. refine PCD description of PcdHeapGuardPropertyMask

A UAF (Use-After-Free) memory issue is a kind of illegal access to
memory which has already been freed. It can be detected by a new
freed-memory guard enforced on freed memory.

BIT4 of the following PCD is used to enable the freed-memory guard feature.

  gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask

Please note this feature is for debug purposes and should not be
enabled in a product BIOS; it also cannot be enabled together with the
pool/page heap guard. It is disabled by default.
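
For example, a platform DSC might enable it as follows (illustrative
fragment; BIT0-BIT3 must stay clear):

  [PcdsFixedAtBuild]
    # BIT4 = freed-memory guard; optionally OR in BIT6 (0x40), i.e. use
    # 0x50, to report the page fault in non-stop mode instead of halting
    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask|0x10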

Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/MdeModulePkg.dec | 16 ++++++++++++----
 MdeModulePkg/MdeModulePkg.uni | 14 ++++++++++----
 2 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/MdeModulePkg/MdeModulePkg.dec b/MdeModulePkg/MdeModulePkg.dec
index 2009dbc5fd..428eeeb670 100644
--- a/MdeModulePkg/MdeModulePkg.dec
+++ b/MdeModulePkg/MdeModulePkg.dec
@@ -1011,14 +1011,22 @@
   gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType|0x0|UINT64|0x30001053
 
   ## This mask is to control Heap Guard behavior.
-  # Note that due to the limit of pool memory implementation and the alignment
-  # requirement of UEFI spec, BIT7 is a try-best setting which cannot guarantee
-  # that the returned pool is exactly adjacent to head guard page or tail guard
-  # page.
+  #
+  # Note:
+  #   a) Heap Guard is for debug purpose and should not be enabled in product
+  #      BIOS.
+  #   b) Due to the limit of pool memory implementation and the alignment
+  #      requirement of UEFI spec, BIT7 is a try-best setting which cannot
+  #      guarantee that the returned pool is exactly adjacent to head guard
+  #      page or tail guard page.
+  #   c) UEFI freed-memory guard and UEFI pool/page guard cannot be enabled
+  #      at the same time.
+  #
   #   BIT0 - Enable UEFI page guard.<BR>
   #   BIT1 - Enable UEFI pool guard.<BR>
   #   BIT2 - Enable SMM page guard.<BR>
   #   BIT3 - Enable SMM pool guard.<BR>
+  #   BIT4 - Enable UEFI freed-memory guard (Use-After-Free memory detection).<BR>
   #   BIT6 - Enable non-stop mode.<BR>
   #   BIT7 - The direction of Guard Page for Pool Guard.
   #          0 - The returned pool is near the tail guard page.<BR>
diff --git a/MdeModulePkg/MdeModulePkg.uni b/MdeModulePkg/MdeModulePkg.uni
index 9d2e473fa9..5fa7a6ae30 100644
--- a/MdeModulePkg/MdeModulePkg.uni
+++ b/MdeModulePkg/MdeModulePkg.uni
@@ -1224,14 +1224,20 @@
 #string STR_gEfiMdeModulePkgTokenSpaceGuid_PcdHeapGuardPropertyMask_PROMPT  #language en-US "The Heap Guard feature mask"
 
 #string STR_gEfiMdeModulePkgTokenSpaceGuid_PcdHeapGuardPropertyMask_HELP    #language en-US "This mask is to control Heap Guard behavior.\n"
-                                                                                            "Note that due to the limit of pool memory implementation and the alignment\n"
-                                                                                            "requirement of UEFI spec, BIT7 is a try-best setting which cannot guarantee\n"
-                                                                                            "that the returned pool is exactly adjacent to head guard page or tail guard\n"
-                                                                                            "page.\n"
+                                                                                            " Note:\n"
+                                                                                            "   a) Heap Guard is for debug purpose and should not be enabled in product"
+                                                                                            "      BIOS.\n"
+                                                                                            "   b) Due to the limit of pool memory implementation and the alignment"
+                                                                                            "      requirement of UEFI spec, BIT7 is a try-best setting which cannot"
+                                                                                            "      guarantee that the returned pool is exactly adjacent to head guard"
+                                                                                            "      page or tail guard page.\n"
+                                                                                            "   c) UEFI freed-memory guard and UEFI pool/page guard cannot be enabled"
+                                                                                            "      at the same time.\n"
                                                                                             "   BIT0 - Enable UEFI page guard.<BR>\n"
                                                                                             "   BIT1 - Enable UEFI pool guard.<BR>\n"
                                                                                             "   BIT2 - Enable SMM page guard.<BR>\n"
                                                                                             "   BIT3 - Enable SMM pool guard.<BR>\n"
+                                                                                            "   BIT4 - Enable UEFI freed-memory guard (Use-After-Free memory detection).<BR>\n"
                                                                                             "   BIT7 - The direction of Guard Page for Pool Guard.\n"
                                                                                             "          0 - The returned pool is near the tail guard page.<BR>\n"
                                                                                             "          1 - The returned pool is near the head guard page.<BR>"
-- 
2.16.2.windows.1



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v4 3/6] UefiCpuPkg/CpuDxe: consider freed-memory guard in non-stop mode
  2018-10-25  7:17 [PATCH v4 0/6] Introduce freed-memory guard feature Jian J Wang
  2018-10-25  7:18 ` [PATCH v4 1/6] MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation Jian J Wang
  2018-10-25  7:18 ` [PATCH v4 2/6] MdeModulePkg: introduce UEFI freed-memory guard bit in HeapGuard PCD Jian J Wang
@ 2018-10-25  7:18 ` Jian J Wang
  2018-10-26  0:38   ` Dong, Eric
  2018-10-25  7:18 ` [PATCH v4 4/6] UefiCpuPkg/CpuDxe: prevent recursive calling of InitializePageTablePool Jian J Wang
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Jian J Wang @ 2018-10-25  7:18 UTC (permalink / raw)
  To: edk2-devel
  Cc: Eric Dong, Laszlo Ersek, Star Zeng, Michael D Kinney, Jiewen Yao,
	Ruiyu Ni

> v4 changes: none

Non-stop mode was introduced / explained in commit 8f2613628acf
("MdeModulePkg/MdeModulePkg.dec: add new settings for PCDs",
2018-08-30).

The macro HEAP_GUARD_NONSTOP_MODE was added to CpuDxe in commit
dcc026217fdc ("UefiCpuPkg/CpuDxe: implement non-stop mode for uefi",
2018-08-30).

Another instance of the macro HEAP_GUARD_NONSTOP_MODE was added to
PiSmmCpuDxeSmm -- with BIT1|BIT0 replaced with BIT3|BIT2 -- in commit
09afd9a42a7f ("UefiCpuPkg/PiSmmCpuDxeSmm: implement non-stop mode for
SMM", 2018-08-30)

Since the freed-memory guard is UEFI-only, this patch only updates
HEAP_GUARD_NONSTOP_MODE in "UefiCpuPkg/CpuDxe/CpuDxe.h" (adding BIT4).
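
For reference, the updated expression evaluates to TRUE only when BIT6
(non-stop mode) is set together with at least one guard-enable bit,
since BIT6|BIT4|BIT1|BIT0 == 0x53 and BIT6 == 0x40. Worked examples
(mask values are illustrative):

  Mask = 0x50 (BIT6|BIT4): 0x50 & 0x53 == 0x50 >  0x40 -> TRUE
  Mask = 0x41 (BIT6|BIT0): 0x41 & 0x53 == 0x41 >  0x40 -> TRUE
  Mask = 0x40 (BIT6 only): 0x40 & 0x53 == 0x40 == 0x40 -> FALSE
  Mask = 0x10 (BIT4 only): 0x10 & 0x53 == 0x10 <  0x40 -> FALSE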

Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 UefiCpuPkg/CpuDxe/CpuDxe.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/UefiCpuPkg/CpuDxe/CpuDxe.h b/UefiCpuPkg/CpuDxe/CpuDxe.h
index 064ea05bba..3183a3f7f4 100644
--- a/UefiCpuPkg/CpuDxe/CpuDxe.h
+++ b/UefiCpuPkg/CpuDxe/CpuDxe.h
@@ -58,7 +58,7 @@
                                        )
 
 #define HEAP_GUARD_NONSTOP_MODE       \
-        ((PcdGet8 (PcdHeapGuardPropertyMask) & (BIT6|BIT1|BIT0)) > BIT6)
+        ((PcdGet8 (PcdHeapGuardPropertyMask) & (BIT6|BIT4|BIT1|BIT0)) > BIT6)
 
 #define NULL_DETECTION_NONSTOP_MODE   \
         ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & (BIT6|BIT0)) > BIT6)
-- 
2.16.2.windows.1



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v4 4/6] UefiCpuPkg/CpuDxe: prevent recursive calling of InitializePageTablePool
  2018-10-25  7:17 [PATCH v4 0/6] Introduce freed-memory guard feature Jian J Wang
                   ` (2 preceding siblings ...)
  2018-10-25  7:18 ` [PATCH v4 3/6] UefiCpuPkg/CpuDxe: consider freed-memory guard in non-stop mode Jian J Wang
@ 2018-10-25  7:18 ` Jian J Wang
  2018-10-26  0:38   ` Dong, Eric
  2018-10-25  7:18 ` [PATCH v4 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock Jian J Wang
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Jian J Wang @ 2018-10-25  7:18 UTC (permalink / raw)
  To: edk2-devel
  Cc: Eric Dong, Laszlo Ersek, Star Zeng, Michael D Kinney, Jiewen Yao,
	Ruiyu Ni

> v4 changes: none

The freed-memory guard feature will cause recursive calls to
InitializePageTablePool(). This is due to the fact that
AllocateAlignedPages() is used to allocate page table pool memory. This
function will most likely call gBS->FreePages to free unaligned pages,
which then causes another round of page attribute changes, as shown
below (freed pages are always marked not-present if the freed-memory
guard is enabled):

   FreePages() <===============|
=> CpuSetMemoryAttributes()    |
=> <if out of page table>      |
=> InitializePageTablePool()   |
=> AllocateAlignedPages()      |
=> FreePages() ================|

The solution is to add a global variable serving as a lock in the page
table pool allocation function, and to fail any other request arriving
while an allocation is still in progress.

Since this issue only happens if the freed-memory guard is enabled, it
won't affect CpuSetMemoryAttributes() in a default BIOS build.

If the freed-memory guard is enabled, it only affects the pages freed
inside AllocateAlignedPages() (they may fail to be marked as
not-present).

Since those freed pages haven't been used yet (their addresses have not
yet been exposed to code outside AllocateAlignedPages()), this doesn't
compromise the freed-memory guard feature.

This change will just fail the CpuSetMemoryAttributes() called from
FreePages(), but it won't fail FreePages() itself, so the error status
won't be propagated to upper layers of code.

Cc: Eric Dong <eric.dong@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 UefiCpuPkg/CpuDxe/CpuPageTable.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/UefiCpuPkg/CpuDxe/CpuPageTable.c b/UefiCpuPkg/CpuDxe/CpuPageTable.c
index 33e8ee2d2c..4bee8c7772 100644
--- a/UefiCpuPkg/CpuDxe/CpuPageTable.c
+++ b/UefiCpuPkg/CpuDxe/CpuPageTable.c
@@ -100,6 +100,7 @@ PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] = {
 };
 
 PAGE_TABLE_POOL                   *mPageTablePool = NULL;
+BOOLEAN                           mPageTablePoolLock = FALSE;
 PAGE_TABLE_LIB_PAGING_CONTEXT     mPagingContext;
 EFI_SMM_BASE2_PROTOCOL            *mSmmBase2 = NULL;
 
@@ -1046,6 +1047,16 @@ InitializePageTablePool (
   VOID                      *Buffer;
   BOOLEAN                   IsModified;
 
+  //
+  // Do not allow re-entrance.
+  //
+  if (mPageTablePoolLock) {
+    return FALSE;
+  }
+
+  mPageTablePoolLock = TRUE;
+  IsModified = FALSE;
+
   //
   // Always reserve at least PAGE_TABLE_POOL_UNIT_PAGES, including one page for
   // header.
@@ -1056,9 +1067,15 @@ InitializePageTablePool (
   Buffer = AllocateAlignedPages (PoolPages, PAGE_TABLE_POOL_ALIGNMENT);
   if (Buffer == NULL) {
     DEBUG ((DEBUG_ERROR, "ERROR: Out of aligned pages\r\n"));
-    return FALSE;
+    goto Done;
   }
 
+  DEBUG ((
+    DEBUG_INFO,
+    "Paging: added %lu pages to page table pool\r\n",
+    (UINT64)PoolPages
+    ));
+
   //
   // Link all pools into a list for easier track later.
   //
@@ -1092,7 +1109,9 @@ InitializePageTablePool (
     );
   ASSERT (IsModified == TRUE);
 
-  return TRUE;
+Done:
+  mPageTablePoolLock = FALSE;
+  return IsModified;
 }
 
 /**
-- 
2.16.2.windows.1



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v4 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock
  2018-10-25  7:17 [PATCH v4 0/6] Introduce freed-memory guard feature Jian J Wang
                   ` (3 preceding siblings ...)
  2018-10-25  7:18 ` [PATCH v4 4/6] UefiCpuPkg/CpuDxe: prevent recursive calling of InitializePageTablePool Jian J Wang
@ 2018-10-25  7:18 ` Jian J Wang
  2018-10-25  7:20   ` Wang, Jian J
  2018-10-25  7:18 ` [PATCH v4 6/6] MdeModulePkg/Core: add freed-memory guard feature Jian J Wang
  2018-10-26  1:14 ` [PATCH v4 0/6] Introduce " Zeng, Star
  6 siblings, 1 reply; 15+ messages in thread
From: Jian J Wang @ 2018-10-25  7:18 UTC (permalink / raw)
  To: edk2-devel
  Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni, Laszlo Ersek

> v4 changes:
> a. add more comments from Laszlo

This issue is hidden in the current code but is exposed by the
introduction of the freed-memory guard feature, because the feature
turns all pool allocations into page allocations.

The solution is to move the memory allocation in CoreGetMemorySpaceMap()
and CoreGetIoSpaceMap() out of the GCD memory map lock.

   CoreDumpGcdMemorySpaceMap()
=> CoreGetMemorySpaceMap()
=> CoreAcquireGcdMemoryLock () *
   AllocatePool()
=> InternalAllocatePool()
=> CoreAllocatePool()
=> CoreAllocatePoolI()
=> CoreAllocatePoolPagesI()
=> CoreAllocatePoolPages()
=> FindFreePages()
=> PromoteMemoryResource()
=> CoreAcquireGcdMemoryLock()  **
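
Here, * marks the first acquisition of the GCD memory lock and ** the
invalid re-acquisition. In outline, the reworked CoreGetMemorySpaceMap()
follows this pattern (simplified sketch; see the diff below for the
real code):

  CoreAcquireGcdMemoryLock ();
  while (TRUE) {
    DescriptorCount = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
    if (DescriptorCount == *NumberOfDescriptors) {
      //
      // Map unchanged since the last allocation: fill in the buffer and
      // leave the loop with the lock still held.
      //
      break;
    }
    CoreReleaseGcdMemoryLock ();      // drop the lock around allocation
    //
    // (Re)allocate *MemorySpaceMap for DescriptorCount entries; this
    // path may itself take the GCD lock via PromoteMemoryResource().
    //
    *NumberOfDescriptors = DescriptorCount;
    CoreAcquireGcdMemoryLock ();      // re-check under the lock
  }
  CoreReleaseGcdMemoryLock ();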

Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/Core/Dxe/Gcd/Gcd.c | 87 +++++++++++++++++++++++++++++------------
 1 file changed, 62 insertions(+), 25 deletions(-)

diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
index d9c65a8045..f99d6bb933 100644
--- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
+++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
@@ -1691,10 +1691,10 @@ CoreGetMemorySpaceMap (
   OUT EFI_GCD_MEMORY_SPACE_DESCRIPTOR  **MemorySpaceMap
   )
 {
-  EFI_STATUS                       Status;
   LIST_ENTRY                       *Link;
   EFI_GCD_MAP_ENTRY                *Entry;
   EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *Descriptor;
+  UINTN                            DescriptorCount;
 
   //
   // Make sure parameters are valid
@@ -1706,38 +1706,75 @@ CoreGetMemorySpaceMap (
     return EFI_INVALID_PARAMETER;
   }
 
-  CoreAcquireGcdMemoryLock ();
+  *NumberOfDescriptors  = 0;
+  *MemorySpaceMap       = NULL;
 
   //
-  // Count the number of descriptors
+  // Take the lock, for entering the loop with the lock held.
   //
-  *NumberOfDescriptors = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
+  CoreAcquireGcdMemoryLock ();
+  while (TRUE) {
+    //
+    // Count the number of descriptors. It might be done more than once because
+    // there's code which has to be running outside the GCD lock.
+    //
+    DescriptorCount = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
+    if (DescriptorCount == *NumberOfDescriptors) {
+      //
+      // Fill in the MemorySpaceMap if no memory space map change.
+      //
+      Descriptor = *MemorySpaceMap;
+      Link = mGcdMemorySpaceMap.ForwardLink;
+      while (Link != &mGcdMemorySpaceMap) {
+        Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
+        BuildMemoryDescriptor (Descriptor, Entry);
+        Descriptor++;
+        Link = Link->ForwardLink;
+      }
+      //
+      // We're done; exit the loop with the lock held.
+      //
+      break;
+    }
 
-  //
-  // Allocate the MemorySpaceMap
-  //
-  *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
-  if (*MemorySpaceMap == NULL) {
-    Status = EFI_OUT_OF_RESOURCES;
-    goto Done;
-  }
+    //
+    // Release the lock before memory allocation, because it might cause
+    // a GCD lock conflict in one of the calling paths of AllocatePool().
+    //
+    CoreReleaseGcdMemoryLock ();
+
+    //
+    // Allocate memory to store the MemorySpaceMap. Note it might already
+    // have been allocated, if the map descriptors changed during the
+    // previous round of memory allocation.
+    //
+    if (*MemorySpaceMap != NULL) {
+      FreePool (*MemorySpaceMap);
+    }
 
+    *MemorySpaceMap = AllocatePool (DescriptorCount *
+                                    sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
+    if (*MemorySpaceMap == NULL) {
+      *NumberOfDescriptors = 0;
+      return EFI_OUT_OF_RESOURCES;
+    }
+
+    //
+    // Save the descriptor count got before for another round of check to make
+    // sure we won't miss any, since we have code running outside the GCD lock.
+    //
+    *NumberOfDescriptors = DescriptorCount;
+    //
+    // Re-acquire the lock, for the next iteration.
+    //
+    CoreAcquireGcdMemoryLock ();
+  }
   //
-  // Fill in the MemorySpaceMap
+  // We exited the loop with the lock held, release it.
   //
-  Descriptor = *MemorySpaceMap;
-  Link = mGcdMemorySpaceMap.ForwardLink;
-  while (Link != &mGcdMemorySpaceMap) {
-    Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
-    BuildMemoryDescriptor (Descriptor, Entry);
-    Descriptor++;
-    Link = Link->ForwardLink;
-  }
-  Status = EFI_SUCCESS;
-
-Done:
   CoreReleaseGcdMemoryLock ();
-  return Status;
+
+  return EFI_SUCCESS;
 }
 
 
-- 
2.16.2.windows.1



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v4 6/6] MdeModulePkg/Core: add freed-memory guard feature
  2018-10-25  7:17 [PATCH v4 0/6] Introduce freed-memory guard feature Jian J Wang
                   ` (4 preceding siblings ...)
  2018-10-25  7:18 ` [PATCH v4 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock Jian J Wang
@ 2018-10-25  7:18 ` Jian J Wang
  2018-10-26  1:18   ` Zeng, Star
  2018-10-26  1:14 ` [PATCH v4 0/6] Introduce " Zeng, Star
  6 siblings, 1 reply; 15+ messages in thread
From: Jian J Wang @ 2018-10-25  7:18 UTC (permalink / raw)
  To: edk2-devel
  Cc: Star Zeng, Michael D Kinney, Jiewen Yao, Ruiyu Ni, Laszlo Ersek

> v4 changes:
> a. replace hard-coded memory attributes with the value obtained from
>    CoreGetMemorySpaceDescriptor()
> b. remove the enclosure of CoreAcquireGcdMemoryLock() and
>    CoreReleaseGcdMemoryLock() around CoreAddRange()

Freed-memory guard is used to detect UAF (Use-After-Free) memory
issues, i.e. illegal accesses to memory which has already been freed.
The principle behind it is similar to the heap guard feature: all pool
memory allocations are turned into page allocations, and the pages are
marked not-present once they are freed.

This also implies that, once a page is allocated and freed, it cannot
be re-allocated, which brings the risk of running out of memory space.
To address this, the memory service adds logic to put part of the freed
pages (at most 64 pages at a time) back into the page pool, so that the
memory service still has memory to allocate after all memory space has
been allocated once. This is called memory promotion. The promoted
pages are always taken from the oldest pages that have been freed.
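
As a worked illustration (addresses made up): each UINT64 in the guard
bitmap covers 64 pages, which is where the 64-pages-at-a-time limit
comes from. Promotion scans downward from mLastPromotedPage:

  Bitmap chunk for 0x7F000000..0x7F03FFFF:  ...0001111111b
                                               (7 freed pages, bits 0-6)

  PromoteGuardedFreePages() clears those 7 bits, re-maps the pages as
  present, and returns StartAddress = 0x7F000000, EndAddress = 0x7F006FFF.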

This feature brings another problem: the number of memory map
descriptors increases enormously (200+ -> 2000+). One change in this
patch updates MergeMemoryMap() in PropertiesTable.c to allow merging
freed pages back into the memory map, which keeps the number at around
510.
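
The effect on the map, with illustrative addresses (the freed pages are
no longer added back as separate EfiConventionalMemory ranges, so
adjacent descriptors can coalesce again):

  Without MergeGuardPages():            With MergeGuardPages():
    BootServicesData 0x1000000, 16 pages  BootServicesData 0x1000000, 24 pages
    <hole: 8 guarded freed pages>         (the guarded freed pages are folded
    BootServicesData 0x1018000, ...        into the preceding descriptor)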

Cc: Star Zeng <star.zeng@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 409 +++++++++++++++++++++++++-
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |  65 +++-
 MdeModulePkg/Core/Dxe/Mem/Page.c              |  42 ++-
 MdeModulePkg/Core/Dxe/Mem/Pool.c              |  23 +-
 MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   2 +-
 MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c  |  18 +-
 6 files changed, 525 insertions(+), 34 deletions(-)

diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
index 663f969c0d..449a022658 100644
--- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
@@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
 GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
                                     = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
 
+//
+// Used for promoting freed but not used pages.
+//
+GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
+
 /**
   Set corresponding bits in bitmap table to 1 according to the address.
 
@@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
 
   @return An integer containing the guarded memory bitmap.
 **/
-UINTN
+UINT64
 GetGuardedMemoryBits (
   IN EFI_PHYSICAL_ADDRESS    Address,
   IN UINTN                   NumberOfPages
@@ -387,7 +392,7 @@ GetGuardedMemoryBits (
 {
   UINT64            *BitMap;
   UINTN             Bits;
-  UINTN             Result;
+  UINT64            Result;
   UINTN             Shift;
   UINTN             BitsToUnitEnd;
 
@@ -660,15 +665,16 @@ IsPageTypeToGuard (
 /**
   Check to see if the heap guard is enabled for page and/or pool allocation.
 
+  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
+
   @return TRUE/FALSE.
 **/
 BOOLEAN
 IsHeapGuardEnabled (
-  VOID
+  UINT8           GuardType
   )
 {
-  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages,
-                              GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_PAGE);
+  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages, GuardType);
 }
 
 /**
@@ -1203,6 +1209,380 @@ SetAllGuardPages (
   }
 }
 
+/**
+  Find the address of top-most guarded free page.
+
+  @param[out]  Address    Start address of top-most guarded free page.
+
+  @return VOID.
+**/
+VOID
+GetLastGuardedFreePageAddress (
+  OUT EFI_PHYSICAL_ADDRESS      *Address
+  )
+{
+  EFI_PHYSICAL_ADDRESS    AddressGranularity;
+  EFI_PHYSICAL_ADDRESS    BaseAddress;
+  UINTN                   Level;
+  UINT64                  Map;
+  INTN                    Index;
+
+  ASSERT (mMapLevel >= 1);
+
+  BaseAddress = 0;
+  Map = mGuardedMemoryMap;
+  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
+       ++Level) {
+    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
+
+    //
+    // Find the non-NULL entry at largest index.
+    //
+    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
+      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
+        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
+        Map = ((UINT64 *)(UINTN)Map)[Index];
+        break;
+      }
+    }
+  }
+
+  //
+  // Find the non-zero MSB then get the page address.
+  //
+  while (Map != 0) {
+    Map = RShiftU64 (Map, 1);
+    BaseAddress += EFI_PAGES_TO_SIZE (1);
+  }
+
+  *Address = BaseAddress;
+}
+
+/**
+  Record freed pages.
+
+  @param[in]  BaseAddress   Base address of just freed pages.
+  @param[in]  Pages         Number of freed pages.
+
+  @return VOID.
+**/
+VOID
+MarkFreedPages (
+  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
+  IN UINTN                    Pages
+  )
+{
+  SetGuardedMemoryBits (BaseAddress, Pages);
+}
+
+/**
+  Record freed pages as well as mark them as not-present.
+
+  @param[in]  BaseAddress   Base address of just freed pages.
+  @param[in]  Pages         Number of freed pages.
+
+  @return VOID.
+**/
+VOID
+EFIAPI
+GuardFreedPages (
+  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
+  IN  UINTN                   Pages
+  )
+{
+  EFI_STATUS      Status;
+
+  //
+  // Legacy memory lower than 1MB might be accessed with no allocation. Leave
+  // them alone.
+  //
+  if (BaseAddress < BASE_1MB) {
+    return;
+  }
+
+  MarkFreedPages (BaseAddress, Pages);
+  if (gCpu != NULL) {
+    //
+    // Set flag to make sure allocating memory without GUARD for page table
+    // operation; otherwise infinite loops could be caused.
+    //
+    mOnGuarding = TRUE;
+    //
+    // Note: This might overwrite other attributes needed by other features,
+    // such as NX memory protection.
+    //
+    Status = gCpu->SetMemoryAttributes (
+                     gCpu,
+                     BaseAddress,
+                     EFI_PAGES_TO_SIZE (Pages),
+                     EFI_MEMORY_RP
+                     );
+    //
+    // Normally we should ASSERT the returned Status. But there might be memory
+    // alloc/free involved in SetMemoryAttributes(), which might fail this
+    // calling. It's rare case so it's OK to let a few tiny holes be not-guarded.
+    //
+    if (EFI_ERROR (Status)) {
+      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%lu)\n", BaseAddress, (UINT64)Pages));
+    }
+    mOnGuarding = FALSE;
+  }
+}
+
+/**
+  Record freed pages as well as mark them as not-present, if enabled.
+
+  @param[in]  BaseAddress   Base address of just freed pages.
+  @param[in]  Pages         Number of freed pages.
+
+  @return VOID.
+**/
+VOID
+EFIAPI
+GuardFreedPagesChecked (
+  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
+  IN  UINTN                   Pages
+  )
+{
+  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
+    GuardFreedPages (BaseAddress, Pages);
+  }
+}
+
+/**
+  Mark all pages freed before CPU Arch Protocol as not-present.
+
+**/
+VOID
+GuardAllFreedPages (
+  VOID
+  )
+{
+  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    TableEntry;
+  UINT64    Address;
+  UINT64    GuardPage;
+  INTN      Level;
+  UINTN     BitIndex;
+  UINTN     GuardPageNumber;
+
+  if (mGuardedMemoryMap == 0 ||
+      mMapLevel == 0 ||
+      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
+    return;
+  }
+
+  CopyMem (Entries, mLevelMask, sizeof (Entries));
+  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
+
+  SetMem (Tables, sizeof(Tables), 0);
+  SetMem (Addresses, sizeof(Addresses), 0);
+  SetMem (Indices, sizeof(Indices), 0);
+
+  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+  Tables[Level]   = mGuardedMemoryMap;
+  Address         = 0;
+  GuardPage       = (UINT64)-1;
+  GuardPageNumber = 0;
+
+  while (TRUE) {
+    if (Indices[Level] > Entries[Level]) {
+      Tables[Level] = 0;
+      Level        -= 1;
+    } else {
+      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
+      Address     = Addresses[Level];
+
+      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
+        Level            += 1;
+        Tables[Level]     = TableEntry;
+        Addresses[Level]  = Address;
+        Indices[Level]    = 0;
+
+        continue;
+      } else {
+        BitIndex = 1;
+        while (BitIndex != 0) {
+          if ((TableEntry & BitIndex) != 0) {
+            if (GuardPage == (UINT64)-1) {
+              GuardPage = Address;
+            }
+            ++GuardPageNumber;
+          } else if (GuardPageNumber > 0) {
+            GuardFreedPages (GuardPage, GuardPageNumber);
+            GuardPageNumber = 0;
+            GuardPage       = (UINT64)-1;
+          }
+
+          if (TableEntry == 0) {
+            break;
+          }
+
+          Address += EFI_PAGES_TO_SIZE (1);
+          BitIndex = LShiftU64 (BitIndex, 1);
+        }
+      }
+    }
+
+    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
+      break;
+    }
+
+    Indices[Level] += 1;
+    Address = (Level == 0) ? 0 : Addresses[Level - 1];
+    Addresses[Level] = Address | LShiftU64 (Indices[Level], Shifts[Level]);
+
+  }
+
+  //
+  // Update the maximum address of freed page which can be used for memory
+  // promotion upon out-of-memory-space.
+  //
+  GetLastGuardedFreePageAddress (&Address);
+  if (Address != 0) {
+    mLastPromotedPage = Address;
+  }
+}
+
+/**
+  This function checks to see if the given memory map descriptor in a memory map
+  can be merged with any guarded free pages.
+
+  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
+  @param  MaxAddress        Maximum address to stop the merge.
+
+  @return VOID
+
+**/
+VOID
+MergeGuardPages (
+  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
+  IN EFI_PHYSICAL_ADDRESS       MaxAddress
+  )
+{
+  EFI_PHYSICAL_ADDRESS        EndAddress;
+  UINT64                      Bitmap;
+  INTN                        Pages;
+
+  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
+      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
+    return;
+  }
+
+  Bitmap = 0;
+  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
+  Pages -= MemoryMapEntry->NumberOfPages;
+  while (Pages > 0) {
+    if (Bitmap == 0) {
+      EndAddress = MemoryMapEntry->PhysicalStart +
+                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
+      Bitmap = GetGuardedMemoryBits (EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
+    }
+
+    if ((Bitmap & 1) == 0) {
+      break;
+    }
+
+    Pages--;
+    MemoryMapEntry->NumberOfPages++;
+    Bitmap = RShiftU64 (Bitmap, 1);
+  }
+}
+
+/**
+  Put part (at most 64 pages a time) guarded free pages back to free page pool.
+
+  Freed memory guard is used to detect Use-After-Free (UAF) memory issue, which
+  makes use of 'Used then throw away' way to detect any illegal access to freed
+  memory. The thrown-away memory will be marked as not-present so that any access
+  to those memory (after free) will be caught by page-fault exception.
+
+  The problem is that this will consume lots of memory space. Once no memory
+  left in pool to allocate, we have to restore part of the freed pages to their
+  normal function. Otherwise the whole system will stop functioning.
+
+  @param  StartAddress    Start address of promoted memory.
+  @param  EndAddress      End address of promoted memory.
+
+  @return TRUE    Succeeded to promote memory.
+  @return FALSE   No free memory found.
+
+**/
+BOOLEAN
+PromoteGuardedFreePages (
+  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
+  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
+  )
+{
+  EFI_STATUS              Status;
+  UINTN                   AvailablePages;
+  UINT64                  Bitmap;
+  EFI_PHYSICAL_ADDRESS    Start;
+
+  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
+    return FALSE;
+  }
+
+  //
+  // Similar to memory allocation service, always search the freed pages in
+  // descending direction.
+  //
+  Start           = mLastPromotedPage;
+  AvailablePages  = 0;
+  while (AvailablePages == 0) {
+    Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
+    //
+    // If the address wraps around, try the really freed pages at top.
+    //
+    if (Start > mLastPromotedPage) {
+      GetLastGuardedFreePageAddress (&Start);
+      ASSERT (Start != 0);
+      Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
+    }
+
+    Bitmap = GetGuardedMemoryBits (Start, GUARDED_HEAP_MAP_ENTRY_BITS);
+    while (Bitmap > 0) {
+      if ((Bitmap & 1) != 0) {
+        ++AvailablePages;
+      } else if (AvailablePages == 0) {
+        Start += EFI_PAGES_TO_SIZE (1);
+      } else {
+        break;
+      }
+
+      Bitmap = RShiftU64 (Bitmap, 1);
+    }
+  }
+
+  if (AvailablePages) {
+    DEBUG ((DEBUG_INFO, "Promoted pages: %lX (%lx)\r\n", Start, (UINT64)AvailablePages));
+    ClearGuardedMemoryBits (Start, AvailablePages);
+
+    if (gCpu != NULL) {
+      //
+      // Set flag to make sure allocating memory without GUARD for page table
+      // operation; otherwise infinite loops could be caused.
+      //
+      mOnGuarding = TRUE;
+      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
+      ASSERT_EFI_ERROR (Status);
+      mOnGuarding = FALSE;
+    }
+
+    mLastPromotedPage = Start;
+    *StartAddress     = Start;
+    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
+    return TRUE;
+  }
+
+  return FALSE;
+}
+
 /**
   Notify function used to set all Guard pages before CPU Arch Protocol installed.
 **/
@@ -1212,7 +1592,20 @@ HeapGuardCpuArchProtocolNotify (
   )
 {
   ASSERT (gCpu != NULL);
-  SetAllGuardPages ();
+
+  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL) &&
+      IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
+    DEBUG ((DEBUG_ERROR, "Heap guard and freed memory guard cannot be enabled at the same time.\n"));
+    CpuDeadLoop ();
+  }
+
+  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
+    SetAllGuardPages ();
+  }
+
+  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
+    GuardAllFreedPages ();
+  }
 }
 
 /**
@@ -1264,6 +1657,10 @@ DumpGuardedMemoryBitmap (
   CHAR8     *Ruler1;
   CHAR8     *Ruler2;
 
+  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_ALL)) {
+    return;
+  }
+
   if (mGuardedMemoryMap == 0 ||
       mMapLevel == 0 ||
       mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
index 8c34692439..55a91ec098 100644
--- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
@@ -1,7 +1,7 @@
 /** @file
   Data type, macros and function prototypes of heap guard feature.
 
-Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017-2018, Intel Corporation. All rights reserved.<BR>
 This program and the accompanying materials
 are licensed and made available under the terms and conditions of the BSD License
 which accompanies this distribution.  The full text of the license may be found at
@@ -160,6 +160,9 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 //
 #define GUARD_HEAP_TYPE_PAGE        BIT0
 #define GUARD_HEAP_TYPE_POOL        BIT1
+#define GUARD_HEAP_TYPE_FREED       BIT4
+#define GUARD_HEAP_TYPE_ALL         \
+        (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_FREED)
 
 //
 // Debug message level
@@ -392,11 +395,13 @@ AdjustPoolHeadF (
 /**
   Check to see if the heap guard is enabled for page and/or pool allocation.
 
+  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
+
   @return TRUE/FALSE.
 **/
 BOOLEAN
 IsHeapGuardEnabled (
-  VOID
+  UINT8           GuardType
   );
 
 /**
@@ -407,6 +412,62 @@ HeapGuardCpuArchProtocolNotify (
   VOID
   );
 
+/**
+  This function checks to see if the given memory map descriptor in a memory map
+  can be merged with any guarded free pages.
+
+  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
+  @param  MaxAddress        Maximum address to stop the merge.
+
+  @return VOID
+
+**/
+VOID
+MergeGuardPages (
+  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
+  IN EFI_PHYSICAL_ADDRESS       MaxAddress
+  );
+
+/**
+  Record freed pages as well as mark them as not-present, if enabled.
+
+  @param[in]  BaseAddress   Base address of just freed pages.
+  @param[in]  Pages         Number of freed pages.
+
+  @return VOID.
+**/
+VOID
+EFIAPI
+GuardFreedPagesChecked (
+  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
+  IN  UINTN                   Pages
+  );
+
+/**
+  Put part (at most 64 pages a time) guarded free pages back to free page pool.
+
+  Freed memory guard is used to detect Use-After-Free (UAF) memory issue, which
+  makes use of 'Used then throw away' way to detect any illegal access to freed
+  memory. The thrown-away memory will be marked as not-present so that any access
+  to those memory (after free) will be caught by page-fault exception.
+
+  The problem is that this will consume lots of memory space. Once no memory
+  left in pool to allocate, we have to restore part of the freed pages to their
+  normal function. Otherwise the whole system will stop functioning.
+
+  @param  StartAddress    Start address of promoted memory.
+  @param  EndAddress      End address of promoted memory.
+
+  @return TRUE    Succeeded to promote memory.
+  @return FALSE   No free memory found.
+
+**/
+BOOLEAN
+PromoteGuardedFreePages (
+  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
+  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
+  );
+
 extern BOOLEAN mOnGuarding;
 
 #endif
diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
index 3b4cc08e7c..961c5b8335 100644
--- a/MdeModulePkg/Core/Dxe/Mem/Page.c
+++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
@@ -401,9 +401,12 @@ PromoteMemoryResource (
   VOID
   )
 {
-  LIST_ENTRY         *Link;
-  EFI_GCD_MAP_ENTRY  *Entry;
-  BOOLEAN            Promoted;
+  LIST_ENTRY                        *Link;
+  EFI_GCD_MAP_ENTRY                 *Entry;
+  BOOLEAN                           Promoted;
+  EFI_PHYSICAL_ADDRESS              StartAddress;
+  EFI_PHYSICAL_ADDRESS              EndAddress;
+  EFI_GCD_MEMORY_SPACE_DESCRIPTOR   Descriptor;
 
   DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
 
@@ -451,6 +454,24 @@ PromoteMemoryResource (
 
   CoreReleaseGcdMemoryLock ();
 
+  if (!Promoted) {
+    //
+    // If freed-memory guard is enabled, we could promote pages from
+    // guarded free pages.
+    //
+    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
+    if (Promoted) {
+      CoreGetMemorySpaceDescriptor (StartAddress, &Descriptor);
+      CoreAddRange (
+        EfiConventionalMemory,
+        StartAddress,
+        EndAddress,
+        Descriptor.Capabilities & ~(EFI_MEMORY_PRESENT | EFI_MEMORY_INITIALIZED |
+                                    EFI_MEMORY_TESTED | EFI_MEMORY_RUNTIME)
+        );
+    }
+  }
+
   return Promoted;
 }
 /**
@@ -896,9 +917,15 @@ CoreConvertPagesEx (
     }
 
     //
-    // Add our new range in
+    // Add our new range in. Don't do this for freed pages if freed-memory
+    // guard is enabled.
     //
-    CoreAddRange (MemType, Start, RangeEnd, Attribute);
+    if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
+        !ChangingType ||
+        MemType != EfiConventionalMemory) {
+      CoreAddRange (MemType, Start, RangeEnd, Attribute);
+    }
+
     if (ChangingType && (MemType == EfiConventionalMemory)) {
       //
       // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
@@ -1514,6 +1541,7 @@ CoreFreePages (
 
   Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
   if (!EFI_ERROR (Status)) {
+    GuardFreedPagesChecked (Memory, NumberOfPages);
     CoreUpdateProfile (
       (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
       MemoryProfileActionFreePages,
@@ -1908,9 +1936,7 @@ Done:
   *MemoryMapSize = BufferSize;
 
   DEBUG_CODE (
-    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
-      DumpGuardedMemoryBitmap ();
-    }
+    DumpGuardedMemoryBitmap ();
   );
 
   return Status;
diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
index 1ff2061f7f..b9182ea807 100644
--- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
+++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
@@ -1,7 +1,7 @@
 /** @file
   UEFI Memory pool management functions.
 
-Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
 This program and the accompanying materials
 are licensed and made available under the terms and conditions of the BSD License
 which accompanies this distribution.  The full text of the license may be found at
@@ -26,7 +26,8 @@ typedef struct {
 } POOL_FREE;
 
 
-#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
+#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
+#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
 typedef struct {
   UINT32          Signature;
   UINT32          Reserved;
@@ -367,6 +368,7 @@ CoreAllocatePoolI (
   UINTN       NoPages;
   UINTN       Granularity;
   BOOLEAN     HasPoolTail;
+  BOOLEAN     PageAsPool;
 
   ASSERT_LOCKED (&mPoolMemoryLock);
 
@@ -386,6 +388,7 @@ CoreAllocatePoolI (
 
   HasPoolTail  = !(NeedGuard &&
                    ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
+  PageAsPool = (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) && !mOnGuarding);
 
   //
   // Adjusting the Size to be of proper alignment so that
@@ -406,7 +409,7 @@ CoreAllocatePoolI (
   // If allocation is over max size, just allocate pages for the request
   // (slow)
   //
-  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
+  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
     if (!HasPoolTail) {
       Size -= sizeof (POOL_TAIL);
     }
@@ -498,7 +501,7 @@ Done:
     //
     // If we have a pool buffer, fill in the header & tail info
     //
-    Head->Signature = POOL_HEAD_SIGNATURE;
+    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
     Head->Size      = Size;
     Head->Type      = (EFI_MEMORY_TYPE) PoolType;
     Buffer          = Head->Data;
@@ -615,6 +618,7 @@ CoreFreePoolPagesI (
   CoreFreePoolPages (Memory, NoPages);
   CoreReleaseMemoryLock ();
 
+  GuardFreedPagesChecked (Memory, NoPages);
   ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
     (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
 }
@@ -685,15 +689,19 @@ CoreFreePoolI (
   UINTN       Granularity;
   BOOLEAN     IsGuarded;
   BOOLEAN     HasPoolTail;
+  BOOLEAN     PageAsPool;
 
   ASSERT(Buffer != NULL);
   //
   // Get the head & tail of the pool entry
   //
-  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
+  Head = BASE_CR (Buffer, POOL_HEAD, Data);
   ASSERT(Head != NULL);
 
-  if (Head->Signature != POOL_HEAD_SIGNATURE) {
+  if (Head->Signature != POOL_HEAD_SIGNATURE &&
+      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
+    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
+            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
     return EFI_INVALID_PARAMETER;
   }
 
@@ -701,6 +709,7 @@ CoreFreePoolI (
                 IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
   HasPoolTail = !(IsGuarded &&
                   ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
+  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
 
   if (HasPoolTail) {
     Tail = HEAD_TO_TAIL (Head);
@@ -757,7 +766,7 @@ CoreFreePoolI (
   //
   // If it's not on the list, it must be pool pages
   //
-  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
+  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
 
     //
     // Return the memory pages back to free memory
diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
index fa8f8fe91a..6298b67db1 100644
--- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
+++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
@@ -1250,7 +1250,7 @@ ApplyMemoryProtectionPolicy (
   // Don't overwrite Guard pages, which should be the first and/or last page,
   // if any.
   //
-  if (IsHeapGuardEnabled ()) {
+  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
     if (IsGuardPage (Memory))  {
       Memory += EFI_PAGE_SIZE;
       Length -= EFI_PAGE_SIZE;
diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
index 05eb4f422b..04cfb2dab2 100644
--- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
+++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
@@ -1,7 +1,7 @@
 /** @file
   UEFI PropertiesTable support
 
-Copyright (c) 2015 - 2017, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2015 - 2018, Intel Corporation. All rights reserved.<BR>
 This program and the accompanying materials
 are licensed and made available under the terms and conditions of the BSD License
 which accompanies this distribution.  The full text of the license may be found at
@@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 #include <Guid/PropertiesTable.h>
 
 #include "DxeMain.h"
+#include "HeapGuard.h"
 
 #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
   ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
@@ -205,16 +206,13 @@ MergeMemoryMap (
     NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
 
     do {
-      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
+      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
+      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
       if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
-          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
-          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
-          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
-        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
-        if (NewMemoryMapEntry != MemoryMapEntry) {
-          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
-        }
-
+          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
+          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
+          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
+        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
         NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
         continue;
       } else {
-- 
2.16.2.windows.1



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock
  2018-10-25  7:18 ` [PATCH v4 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock Jian J Wang
@ 2018-10-25  7:20   ` Wang, Jian J
  2018-10-26  1:17     ` Zeng, Star
  0 siblings, 1 reply; 15+ messages in thread
From: Wang, Jian J @ 2018-10-25  7:20 UTC (permalink / raw)
  To: edk2-devel, edk2-devel@lists.01.org
  Cc: Kinney, Michael D, Ni, Ruiyu, Yao, Jiewen, Zeng, Star,
	Laszlo Ersek

Sorry, forgot to update commit message:

> This issue is hidden in the current code but is exposed by the
> introduction of the freed-memory guard feature, because the feature
> turns all pool allocations into page allocations.
> 
> The solution is to move the memory allocation in CoreGetMemorySpaceMap()
> out of the GCD memory map lock.

Regards,
Jian


> -----Original Message-----
> From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org]
> Sent: Thursday, October 25, 2018 3:18 PM
> To: edk2-devel@lists.01.org
> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Zeng, Star
> <star.zeng@intel.com>; Laszlo Ersek <lersek@redhat.com>
> Subject: [edk2] [PATCH v4 5/6] MdeModulePkg/Core: prevent re-acquire GCD
> memory lock
> 
> > v4 changes:
> > a. add more comments from Laszlo
> 
> This issue is hidden in the current code but is exposed by the
> introduction of the freed-memory guard feature, because the feature
> turns all pool allocations into page allocations.
> 
> The solution is to move the memory allocation in CoreGetMemorySpaceMap()
> and CoreGetIoSpaceMap() out of the GCD memory map lock.
> 
>    CoreDumpGcdMemorySpaceMap()
> => CoreGetMemorySpaceMap()
> => CoreAcquireGcdMemoryLock () *
>    AllocatePool()
> => InternalAllocatePool()
> => CoreAllocatePool()
> => CoreAllocatePoolI()
> => CoreAllocatePoolPagesI()
> => CoreAllocatePoolPages()
> => FindFreePages()
> => PromoteMemoryResource()
> => CoreAcquireGcdMemoryLock()  **
> 
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>  MdeModulePkg/Core/Dxe/Gcd/Gcd.c | 87 +++++++++++++++++++++++++++++------------
>  1 file changed, 62 insertions(+), 25 deletions(-)
> 
> diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> index d9c65a8045..f99d6bb933 100644
> --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> @@ -1691,10 +1691,10 @@ CoreGetMemorySpaceMap (
>    OUT EFI_GCD_MEMORY_SPACE_DESCRIPTOR  **MemorySpaceMap
>    )
>  {
> -  EFI_STATUS                       Status;
>    LIST_ENTRY                       *Link;
>    EFI_GCD_MAP_ENTRY                *Entry;
>    EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *Descriptor;
> +  UINTN                            DescriptorCount;
> 
>    //
>    // Make sure parameters are valid
> @@ -1706,38 +1706,75 @@ CoreGetMemorySpaceMap (
>      return EFI_INVALID_PARAMETER;
>    }
> 
> -  CoreAcquireGcdMemoryLock ();
> +  *NumberOfDescriptors  = 0;
> +  *MemorySpaceMap       = NULL;
> 
>    //
> -  // Count the number of descriptors
> +  // Take the lock, for entering the loop with the lock held.
>    //
> -  *NumberOfDescriptors = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
> +  CoreAcquireGcdMemoryLock ();
> +  while (TRUE) {
> +    //
> +    // Count the number of descriptors. It might be done more than once
> because
> +    // there's code which has to be running outside the GCD lock.
> +    //
> +    DescriptorCount = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
> +    if (DescriptorCount == *NumberOfDescriptors) {
> +      //
> +      // Fill in the MemorySpaceMap if no memory space map change.
> +      //
> +      Descriptor = *MemorySpaceMap;
> +      Link = mGcdMemorySpaceMap.ForwardLink;
> +      while (Link != &mGcdMemorySpaceMap) {
> +        Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
> +        BuildMemoryDescriptor (Descriptor, Entry);
> +        Descriptor++;
> +        Link = Link->ForwardLink;
> +      }
> +      //
> +      // We're done; exit the loop with the lock held.
> +      //
> +      break;
> +    }
> 
> -  //
> -  // Allocate the MemorySpaceMap
> -  //
> -  *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
> -  if (*MemorySpaceMap == NULL) {
> -    Status = EFI_OUT_OF_RESOURCES;
> -    goto Done;
> -  }
> +    //
> +    // Release the lock before memory allocation, because it might cause a
> +    // GCD lock conflict in one of the calling paths in AllocatePool().
> +    //
> +    CoreReleaseGcdMemoryLock ();
> +
> +    //
> +    // Allocate memory to store the MemorySpaceMap. Note it might already be
> +    // allocated if the map descriptors changed during the previous round of
> +    // memory allocation.
> +    //
> +    if (*MemorySpaceMap != NULL) {
> +      FreePool (*MemorySpaceMap);
> +    }
> 
> +    *MemorySpaceMap = AllocatePool (DescriptorCount *
> +                                    sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
> +    if (*MemorySpaceMap == NULL) {
> +      *NumberOfDescriptors = 0;
> +      return EFI_OUT_OF_RESOURCES;
> +    }
> +
> +    //
> +    // Save the descriptor count obtained above for another round of checking,
> +    // to make sure we won't miss any, since we have code running outside the GCD lock.
> +    //
> +    *NumberOfDescriptors = DescriptorCount;
> +    //
> +    // Re-acquire the lock, for the next iteration.
> +    //
> +    CoreAcquireGcdMemoryLock ();
> +  }
>    //
> -  // Fill in the MemorySpaceMap
> +  // We exited the loop with the lock held, release it.
>    //
> -  Descriptor = *MemorySpaceMap;
> -  Link = mGcdMemorySpaceMap.ForwardLink;
> -  while (Link != &mGcdMemorySpaceMap) {
> -    Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
> -    BuildMemoryDescriptor (Descriptor, Entry);
> -    Descriptor++;
> -    Link = Link->ForwardLink;
> -  }
> -  Status = EFI_SUCCESS;
> -
> -Done:
>    CoreReleaseGcdMemoryLock ();
> -  return Status;
> +
> +  return EFI_SUCCESS;
>  }
> 
> 
> --
> 2.16.2.windows.1
> 
> _______________________________________________
> edk2-devel mailing list
> edk2-devel@lists.01.org
> https://lists.01.org/mailman/listinfo/edk2-devel


^ permalink raw reply	[flat|nested] 15+ messages in thread
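
The pattern adopted by the patch above reduces to the following standalone sketch. AcquireLock(), ReleaseLock(), CountEntries() and Fill() are simplified, hypothetical stand-ins for CoreAcquireGcdMemoryLock(), CoreReleaseGcdMemoryLock(), CoreCountGcdMapEntry() and the BuildMemoryDescriptor() loop; the point is only the release/allocate/re-check structure that keeps the allocator from running under the lock it may itself need.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { unsigned Type; } DESC;   /* simplified descriptor      */

    static DESC   gMap[5];                    /* pretend GCD map, 5 entries */
    static size_t gEntries = 5;

    static void   AcquireLock (void)  { /* CoreAcquireGcdMemoryLock() */ }
    static void   ReleaseLock (void)  { /* CoreReleaseGcdMemoryLock() */ }
    static size_t CountEntries (void) { return gEntries; }

    static void
    Fill (DESC *Out)                          /* BuildMemoryDescriptor loop */
    {
      for (size_t i = 0; i < gEntries; i++) {
        Out[i] = gMap[i];
      }
    }

    static int
    GetMapSnapshot (size_t *Count, DESC **Map)
    {
      *Count = 0;
      *Map   = NULL;

      AcquireLock ();
      for (;;) {
        size_t Current = CountEntries ();
        if (Current == *Count) {
          Fill (*Map);          /* buffer is big enough: fill under the lock */
          break;                /* leave the loop with the lock still held   */
        }
        ReleaseLock ();         /* the allocator may take this same lock     */
        free (*Map);            /* drop a stale buffer from an earlier pass  */
        *Map = malloc (Current * sizeof (DESC));
        if (*Map == NULL) {
          *Count = 0;
          return -1;
        }
        *Count = Current;       /* what the buffer was sized for             */
        AcquireLock ();         /* re-validate the count on the next pass    */
      }
      ReleaseLock ();
      return 0;
    }

    int
    main (void)
    {
      size_t  N;
      DESC   *Map;

      if (GetMapSnapshot (&N, &Map) == 0) {
        printf ("snapshot has %zu descriptors\n", N);
        free (Map);
      }
      return 0;
    }

In the common case the loop body runs twice: one pass to size the buffer, one to fill it with the lock held.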

* Re: [PATCH v4 3/6] UefiCpuPkg/CpuDxe: consider freed-memory guard in non-stop mode
  2018-10-25  7:18 ` [PATCH v4 3/6] UefiCpuPkg/CpuDxe: consider freed-memory guard in non-stop mode Jian J Wang
@ 2018-10-26  0:38   ` Dong, Eric
  0 siblings, 0 replies; 15+ messages in thread
From: Dong, Eric @ 2018-10-26  0:38 UTC (permalink / raw)
  To: Wang, Jian J, edk2-devel@lists.01.org
  Cc: Laszlo Ersek, Zeng, Star, Kinney, Michael D, Yao, Jiewen,
	Ni, Ruiyu

Reviewed-by: Eric Dong <eric.dong@intel.com>

> -----Original Message-----
> From: Wang, Jian J
> Sent: Thursday, October 25, 2018 3:18 PM
> To: edk2-devel@lists.01.org
> Cc: Dong, Eric <eric.dong@intel.com>; Laszlo Ersek <lersek@redhat.com>;
> Zeng, Star <star.zeng@intel.com>; Kinney, Michael D
> <michael.d.kinney@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Ni,
> Ruiyu <ruiyu.ni@intel.com>
> Subject: [PATCH v4 3/6] UefiCpuPkg/CpuDxe: consider freed-memory guard
> in non-stop mode
> 
> > v4 changes: none
> 
> Non-stop mode was introduced / explained in commit 8f2613628acf
> ("MdeModulePkg/MdeModulePkg.dec: add new settings for PCDs", 2018-
> 08-30).
> 
> The macro HEAP_GUARD_NONSTOP_MODE was added to CpuDxe in commit
> dcc026217fdc ("UefiCpuPkg/CpuDxe: implement non-stop mode for uefi",
> 2018-08-30).
> 
> Another instance of the macro HEAP_GUARD_NONSTOP_MODE was added
> to PiSmmCpuDxeSmm -- with BIT1|BIT0 replaced with BIT3|BIT2 -- in commit
> 09afd9a42a7f ("UefiCpuPkg/PiSmmCpuDxeSmm: implement non-stop mode
> for SMM", 2018-08-30).
> 
> Since the freed-memory guard is UEFI-only, this patch only updates
> HEAP_GUARD_NONSTOP_MODE in "UefiCpuPkg/CpuDxe/CpuDxe.h" (adding
> BIT4).
> 
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>  UefiCpuPkg/CpuDxe/CpuDxe.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/UefiCpuPkg/CpuDxe/CpuDxe.h b/UefiCpuPkg/CpuDxe/CpuDxe.h
> index 064ea05bba..3183a3f7f4 100644
> --- a/UefiCpuPkg/CpuDxe/CpuDxe.h
> +++ b/UefiCpuPkg/CpuDxe/CpuDxe.h
> @@ -58,7 +58,7 @@
>                                         )
> 
>  #define HEAP_GUARD_NONSTOP_MODE       \
> -        ((PcdGet8 (PcdHeapGuardPropertyMask) & (BIT6|BIT1|BIT0)) > BIT6)
> +        ((PcdGet8 (PcdHeapGuardPropertyMask) & (BIT6|BIT4|BIT1|BIT0)) > BIT6)
> 
>  #define NULL_DETECTION_NONSTOP_MODE   \
>          ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & (BIT6|BIT0)) > BIT6)
> --
> 2.16.2.windows.1



^ permalink raw reply	[flat|nested] 15+ messages in thread
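
The '> BIT6' comparison in the macro above is compact but non-obvious: after masking, the value can exceed BIT6 only when BIT6 (non-stop) is set together with at least one guard-enable bit. A minimal sketch, in plain C with the bit values copied from the patch, walks the three cases:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BIT0  0x01u   /* UEFI page guard                          */
    #define BIT1  0x02u   /* UEFI pool guard                          */
    #define BIT4  0x10u   /* freed-memory guard (new in this series)  */
    #define BIT6  0x40u   /* non-stop mode                            */

    /* Mirrors HEAP_GUARD_NONSTOP_MODE from CpuDxe.h. */
    static bool
    HeapGuardNonstopMode (uint8_t Mask)
    {
      return (Mask & (BIT6 | BIT4 | BIT1 | BIT0)) > BIT6;
    }

    int
    main (void)
    {
      /* BIT6 alone masks to exactly BIT6, so '>' is false: no guard enabled. */
      printf ("%d\n", HeapGuardNonstopMode (BIT6));         /* prints 0 */

      /* Guard bits without BIT6 mask to a value below BIT6: still false.    */
      printf ("%d\n", HeapGuardNonstopMode (BIT4 | BIT0));  /* prints 0 */

      /* BIT6 plus any guard bit exceeds BIT6: non-stop mode is active.      */
      printf ("%d\n", HeapGuardNonstopMode (BIT6 | BIT4));  /* prints 1 */
      return 0;
    }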

* Re: [PATCH v4 4/6] UefiCpuPkg/CpuDxe: prevent recursive calling of InitializePageTablePool
  2018-10-25  7:18 ` [PATCH v4 4/6] UefiCpuPkg/CpuDxe: prevent recursive calling of InitializePageTablePool Jian J Wang
@ 2018-10-26  0:38   ` Dong, Eric
  0 siblings, 0 replies; 15+ messages in thread
From: Dong, Eric @ 2018-10-26  0:38 UTC (permalink / raw)
  To: Wang, Jian J, edk2-devel@lists.01.org
  Cc: Laszlo Ersek, Zeng, Star, Kinney, Michael D, Yao, Jiewen,
	Ni, Ruiyu

Reviewed-by: Eric Dong <eric.dong@intel.com>

> -----Original Message-----
> From: Wang, Jian J
> Sent: Thursday, October 25, 2018 3:18 PM
> To: edk2-devel@lists.01.org
> Cc: Dong, Eric <eric.dong@intel.com>; Laszlo Ersek <lersek@redhat.com>;
> Zeng, Star <star.zeng@intel.com>; Kinney, Michael D
> <michael.d.kinney@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Ni,
> Ruiyu <ruiyu.ni@intel.com>
> Subject: [PATCH v4 4/6] UefiCpuPkg/CpuDxe: prevent recursive calling of
> InitializePageTablePool
> 
> > v4 changes: none
> 
> The freed-memory guard feature will cause a recursive call of
> InitializePageTablePool(). This is due to the fact that
> AllocateAlignedPages() is used to allocate page table pool memory.
> This function will most likely call gBS->FreePages to free unaligned pages,
> and then cause another round of page attribute changes, like below (freed
> pages will always be marked not-present if the freed-memory guard is enabled):
> 
>    FreePages() <===============|
> => CpuSetMemoryAttributes()    |
> => <if out of page table>      |
> => InitializePageTablePool()   |
> => AllocateAlignedPages()      |
> => FreePages() ================|
> 
> The solution is to add a global variable as a lock in the page table pool
> allocation function and to fail any other request while an allocation is in
> progress.
> 
> Since this issue will only happen if the freed-memory guard is enabled, it
> won't affect CpuSetMemoryAttributes() in the default build of a BIOS.
> 
> If the freed-memory guard is enabled, it only affects the pages freed in
> AllocateAlignedPages() (they fail to be marked as not-present).
> 
> Since those freed pages haven't been used yet (their addresses are not yet
> exposed to code outside AllocateAlignedPages()), it won't compromise the
> freed-memory guard feature.
> 
> This change will just fail the CpuSetMemoryAttributes() called from
> FreePages(), but it won't fail FreePages() itself, so the error status won't
> be propagated to upper layers of code.
> 
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>  UefiCpuPkg/CpuDxe/CpuPageTable.c | 23 +++++++++++++++++++++--
>  1 file changed, 21 insertions(+), 2 deletions(-)
> 
> diff --git a/UefiCpuPkg/CpuDxe/CpuPageTable.c b/UefiCpuPkg/CpuDxe/CpuPageTable.c
> index 33e8ee2d2c..4bee8c7772 100644
> --- a/UefiCpuPkg/CpuDxe/CpuPageTable.c
> +++ b/UefiCpuPkg/CpuDxe/CpuPageTable.c
> @@ -100,6 +100,7 @@ PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] = {
>  };
> 
>  PAGE_TABLE_POOL                   *mPageTablePool = NULL;
> +BOOLEAN                           mPageTablePoolLock = FALSE;
>  PAGE_TABLE_LIB_PAGING_CONTEXT     mPagingContext;
>  EFI_SMM_BASE2_PROTOCOL            *mSmmBase2 = NULL;
> 
> @@ -1046,6 +1047,16 @@ InitializePageTablePool (
>    VOID                      *Buffer;
>    BOOLEAN                   IsModified;
> 
> +  //
> +  // Do not allow re-entrance.
> +  //
> +  if (mPageTablePoolLock) {
> +    return FALSE;
> +  }
> +
> +  mPageTablePoolLock = TRUE;
> +  IsModified = FALSE;
> +
>    //
>    // Always reserve at least PAGE_TABLE_POOL_UNIT_PAGES, including one page for
>    // header.
> @@ -1056,9 +1067,15 @@ InitializePageTablePool (
>    Buffer = AllocateAlignedPages (PoolPages, PAGE_TABLE_POOL_ALIGNMENT);
>    if (Buffer == NULL) {
>      DEBUG ((DEBUG_ERROR, "ERROR: Out of aligned pages\r\n"));
> -    return FALSE;
> +    goto Done;
>    }
> 
> +  DEBUG ((
> +    DEBUG_INFO,
> +    "Paging: added %lu pages to page table pool\r\n",
> +    (UINT64)PoolPages
> +    ));
> +
>    //
>    // Link all pools into a list for easier track later.
>    //
> @@ -1092,7 +1109,9 @@ InitializePageTablePool (
>      );
>    ASSERT (IsModified == TRUE);
> 
> -  return TRUE;
> +Done:
> +  mPageTablePoolLock = FALSE;
> +  return IsModified;
>  }
> 
>  /**
> --
> 2.16.2.windows.1



^ permalink raw reply	[flat|nested] 15+ messages in thread
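
The fix above is the classic re-entrancy guard: a module-scope flag is tested and set on entry, so a nested call fails fast instead of recursing. The following standalone sketch shows the shape of that fix; GrowPool() and FreePagesStub() are hypothetical simplifications of InitializePageTablePool() and the gBS->FreePages path that re-enters it in the real flow.

    #include <stdbool.h>
    #include <stdio.h>

    static bool gPoolLock = false;   /* mirrors mPageTablePoolLock */

    static bool GrowPool (void);

    /* Stand-in for freeing the unaligned remainder of an aligned allocation:
       marking the freed pages not-present can demand a fresh page-table page,
       which re-enters GrowPool(). */
    static void
    FreePagesStub (void)
    {
      printf ("nested GrowPool() returned %d\n", GrowPool ());  /* prints 0 */
    }

    static bool
    GrowPool (void)
    {
      bool Modified;

      if (gPoolLock) {
        return false;              /* re-entered: refuse instead of recursing */
      }
      gPoolLock = true;
      Modified  = false;

      FreePagesStub ();            /* the call that would otherwise recurse   */
      Modified = true;             /* pool grown, attributes updated          */

      gPoolLock = false;
      return Modified;
    }

    int
    main (void)
    {
      printf ("outer GrowPool() returned %d\n", GrowPool ());   /* prints 1 */
      return 0;
    }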

* Re: [PATCH v4 0/6] Introduce freed-memory guard feature
  2018-10-25  7:17 [PATCH v4 0/6] Introduce freed-memory guard feature Jian J Wang
                   ` (5 preceding siblings ...)
  2018-10-25  7:18 ` [PATCH v4 6/6] MdeModulePkg/Core: add freed-memory guard feature Jian J Wang
@ 2018-10-26  1:14 ` Zeng, Star
  6 siblings, 0 replies; 15+ messages in thread
From: Zeng, Star @ 2018-10-26  1:14 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel; +Cc: star.zeng

On 2018/10/25 15:17, Jian J Wang wrote:
>> v4 changes:
>> Updated per comments from Star. Please refer to individual patch
>> file for details (#2/5/6)

Minor comments on patches 5 and 6; please see the individual feedback.
With them addressed, Reviewed-by: Star Zeng <star.zeng@intel.com> to
patches 1, 2, 5 and 6.

And remember to add the RB from Laszlo. I think you can at least add his
RB for patch 1, and maybe patch 2 for the MdeModulePkg change.


Thanks,
Star

> 
> Freed-memory guard is a new feature used to detect the UAF (Use-After-Free)
> memory issue.
> 
> Tests:
> a. Feature basic unit/functionality test
> b. OVMF regression test
> 
> Jian J Wang (6):
>    MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation
>    MdeModulePkg: introduce UEFI freed-memory guard bit in HeapGuard PCD
>    UefiCpuPkg/CpuDxe: consider freed-memory guard in non-stop mode
>    UefiCpuPkg/CpuDxe: prevent recursive calling of
>      InitializePageTablePool
>    MdeModulePkg/Core: prevent re-acquire GCD memory lock
>    MdeModulePkg/Core: add freed-memory guard feature
> 
>   MdeModulePkg/Core/Dxe/Gcd/Gcd.c               |  87 ++++--
>   MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 409 +++++++++++++++++++++++++-
>   MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |  65 +++-
>   MdeModulePkg/Core/Dxe/Mem/Page.c              |  42 ++-
>   MdeModulePkg/Core/Dxe/Mem/Pool.c              |  23 +-
>   MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   2 +-
>   MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c  |  18 +-
>   MdeModulePkg/MdeModulePkg.dec                 |  20 +-
>   MdeModulePkg/MdeModulePkg.uni                 |  16 +-
>   UefiCpuPkg/CpuDxe/CpuDxe.h                    |   2 +-
>   UefiCpuPkg/CpuDxe/CpuPageTable.c              |  23 +-
>   11 files changed, 637 insertions(+), 70 deletions(-)
> 



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock
  2018-10-25  7:20   ` Wang, Jian J
@ 2018-10-26  1:17     ` Zeng, Star
  2018-10-26  1:19       ` Wang, Jian J
  0 siblings, 1 reply; 15+ messages in thread
From: Zeng, Star @ 2018-10-26  1:17 UTC (permalink / raw)
  To: Wang, Jian J, edk2-devel, edk2-devel@lists.01.org
  Cc: Kinney, Michael D, Ni, Ruiyu, Yao, Jiewen, Laszlo Ersek,
	star.zeng

On 2018/10/25 15:20, Wang, Jian J wrote:
> Sorry, I forgot to update the commit message:
> 
>> This issue is hidden in the current code but exposed by the introduction
>> of the freed-memory guard feature, due to the fact that the feature
>> will turn all pool allocations into page allocations.
>>
>> The solution is to move the memory allocation in CoreGetMemorySpaceMap()
>> out of the GCD memory map lock.
> 
> Regards,
> Jian
> 
> 
>> -----Original Message-----
>> From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org]
>> Sent: Thursday, October 25, 2018 3:18 PM
>> To: edk2-devel@lists.01.org
>> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
>> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Zeng, Star
>> <star.zeng@intel.com>; Laszlo Ersek <lersek@redhat.com>
>> Subject: [edk2] [PATCH v4 5/6] MdeModulePkg/Core: prevent re-acquire GCD
>> memory lock
>>
>>> v4 changes:
>>> a. add more comments from Laszlo
>>
>> This issue is hidden in the current code but exposed by the introduction
>> of the freed-memory guard feature, due to the fact that the feature
>> will turn all pool allocations into page allocations.
>>
>> The solution is to move the memory allocation in CoreGetMemorySpaceMap()
>> and CoreGetIoSpaceMap() out of the GCD memory map lock.
>>
>>     CoreDumpGcdMemorySpaceMap()
>> => CoreGetMemorySpaceMap()
>> => CoreAcquireGcdMemoryLock () *
>>     AllocatePool()
>> => InternalAllocatePool()
>> => CoreAllocatePool()
>> => CoreAllocatePoolI()
>> => CoreAllocatePoolPagesI()
>> => CoreAllocatePoolPages()
>> => FindFreePages()
>> => PromoteMemoryResource()
>> => CoreAcquireGcdMemoryLock()  **
>>
>> Cc: Star Zeng <star.zeng@intel.com>
>> Cc: Michael D Kinney <michael.d.kinney@intel.com>
>> Cc: Jiewen Yao <jiewen.yao@intel.com>
>> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
>> Cc: Laszlo Ersek <lersek@redhat.com>
>> Contributed-under: TianoCore Contribution Agreement 1.1
>> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
>> ---
>>   MdeModulePkg/Core/Dxe/Gcd/Gcd.c | 87 +++++++++++++++++++++++++++++------------
>>   1 file changed, 62 insertions(+), 25 deletions(-)
>>
>> diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
>> index d9c65a8045..f99d6bb933 100644
>> --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
>> +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
>> @@ -1691,10 +1691,10 @@ CoreGetMemorySpaceMap (
>>     OUT EFI_GCD_MEMORY_SPACE_DESCRIPTOR  **MemorySpaceMap
>>     )
>>   {
>> -  EFI_STATUS                       Status;
>>     LIST_ENTRY                       *Link;
>>     EFI_GCD_MAP_ENTRY                *Entry;
>>     EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *Descriptor;
>> +  UINTN                            DescriptorCount;
>>
>>     //
>>     // Make sure parameters are valid
>> @@ -1706,38 +1706,75 @@ CoreGetMemorySpaceMap (
>>       return EFI_INVALID_PARAMETER;
>>     }
>>
>> -  CoreAcquireGcdMemoryLock ();
>> +  *NumberOfDescriptors  = 0;
>> +  *MemorySpaceMap       = NULL;
>>
>>     //
>> -  // Count the number of descriptors
>> +  // Take the lock, for entering the loop with the lock held.
>>     //
>> -  *NumberOfDescriptors = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
>> +  CoreAcquireGcdMemoryLock ();
>> +  while (TRUE) {
>> +    //
>> +    // Count the number of descriptors. It might be done more than once

How about using "Count the descriptors" here? No need to send a new patch
series for it.

>> because
>> +    // there's code which has to be running outside the GCD lock.

I gave a comment on this line in the V3 series.
How about using "AllocatePool() calling code below" instead of "there's
code", to be more specific?

Thanks,
Star

>> +    //
>> +    DescriptorCount = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
>> +    if (DescriptorCount == *NumberOfDescriptors) {
>> +      //
>> +      // Fill in the MemorySpaceMap if no memory space map change.
>> +      //
>> +      Descriptor = *MemorySpaceMap;
>> +      Link = mGcdMemorySpaceMap.ForwardLink;
>> +      while (Link != &mGcdMemorySpaceMap) {
>> +        Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
>> +        BuildMemoryDescriptor (Descriptor, Entry);
>> +        Descriptor++;
>> +        Link = Link->ForwardLink;
>> +      }
>> +      //
>> +      // We're done; exit the loop with the lock held.
>> +      //
>> +      break;
>> +    }
>>
>> -  //
>> -  // Allocate the MemorySpaceMap
>> -  //
>> -  *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
>> -  if (*MemorySpaceMap == NULL) {
>> -    Status = EFI_OUT_OF_RESOURCES;
>> -    goto Done;
>> -  }
>> +    //
>> +    // Release the lock before memory allocation, because it might cause a
>> +    // GCD lock conflict in one of the calling paths in AllocatePool().
>> +    //
>> +    CoreReleaseGcdMemoryLock ();
>> +
>> +    //
>> +    // Allocate memory to store the MemorySpaceMap. Note it might already be
>> +    // allocated if the map descriptors changed during the previous round of
>> +    // memory allocation.
>> +    //
>> +    if (*MemorySpaceMap != NULL) {
>> +      FreePool (*MemorySpaceMap);
>> +    }
>>
>> +    *MemorySpaceMap = AllocatePool (DescriptorCount *
>> +                                    sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
>> +    if (*MemorySpaceMap == NULL) {
>> +      *NumberOfDescriptors = 0;
>> +      return EFI_OUT_OF_RESOURCES;
>> +    }
>> +
>> +    //
>> +    // Save the descriptor count obtained above for another round of checking,
>> +    // to make sure we won't miss any, since we have code running outside the GCD lock.
>> +    //
>> +    *NumberOfDescriptors = DescriptorCount;
>> +    //
>> +    // Re-acquire the lock, for the next iteration.
>> +    //
>> +    CoreAcquireGcdMemoryLock ();
>> +  }
>>     //
>> -  // Fill in the MemorySpaceMap
>> +  // We exited the loop with the lock held, release it.
>>     //
>> -  Descriptor = *MemorySpaceMap;
>> -  Link = mGcdMemorySpaceMap.ForwardLink;
>> -  while (Link != &mGcdMemorySpaceMap) {
>> -    Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
>> -    BuildMemoryDescriptor (Descriptor, Entry);
>> -    Descriptor++;
>> -    Link = Link->ForwardLink;
>> -  }
>> -  Status = EFI_SUCCESS;
>> -
>> -Done:
>>     CoreReleaseGcdMemoryLock ();
>> -  return Status;
>> +
>> +  return EFI_SUCCESS;
>>   }
>>
>>
>> --
>> 2.16.2.windows.1
>>
>> _______________________________________________
>> edk2-devel mailing list
>> edk2-devel@lists.01.org
>> https://lists.01.org/mailman/listinfo/edk2-devel



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 6/6] MdeModulePkg/Core: add freed-memory guard feature
  2018-10-25  7:18 ` [PATCH v4 6/6] MdeModulePkg/Core: add freed-memory guard feature Jian J Wang
@ 2018-10-26  1:18   ` Zeng, Star
  2018-10-26  1:20     ` Wang, Jian J
  0 siblings, 1 reply; 15+ messages in thread
From: Zeng, Star @ 2018-10-26  1:18 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel
  Cc: Michael D Kinney, Ruiyu Ni, Jiewen Yao, Laszlo Ersek, star.zeng

On 2018/10/25 15:18, Jian J Wang wrote:
>> v4 changes:
>> a. replace hard-coded memory attributes with the value got from
>>     CoreGetMemorySpaceDescriptor()
>> b. remove the enclosure of CoreAcquireGcdMemoryLock() and
>>     CoreReleaseGcdMemoryLock() around CoreAddRange()
> 
> Freed-memory guard is used to detect the UAF (Use-After-Free) memory issue,
> which is illegal access to memory that has been freed. The principle
> behind it is similar to the heap guard feature; that is, we'll turn all pool

Remember to use "pool/page heap guard feature" instead of "heap guard 
feature" here. No need to send a new patch series for it.

Thanks,
Star

> memory allocations into page allocations and mark them not-present
> once they are freed.
> 
> This also implies that, once a page is allocated and freed, it cannot
> be re-allocated. This brings another issue: there's a risk that memory
> space will run out. To address it, the memory service adds logic to put
> part (at most 64 pages at a time) of the freed pages back into the page
> pool, so that the memory service can still have memory to allocate after
> all memory space has been allocated once. This is called memory promotion.
> The promoted pages are always taken from the eldest pages which have been
> freed.
> 
> Another problem this feature brings is that the number of memory map
> descriptors will increase enormously (200+ -> 2000+). One of the changes
> in this patch is to update MergeMemoryMap() in file PropertiesTable.c to
> allow merging freed pages back into the memory map. Now the number can
> stay at around 510.
> 
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>   MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 409 +++++++++++++++++++++++++-
>   MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |  65 +++-
>   MdeModulePkg/Core/Dxe/Mem/Page.c              |  42 ++-
>   MdeModulePkg/Core/Dxe/Mem/Pool.c              |  23 +-
>   MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   2 +-
>   MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c  |  18 +-
>   6 files changed, 525 insertions(+), 34 deletions(-)
> 
> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> index 663f969c0d..449a022658 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
>   GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
>                                       = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
>   
> +//
> +// Used for promoting freed but not used pages.
> +//
> +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
> +
>   /**
>     Set corresponding bits in bitmap table to 1 according to the address.
>   
> @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
>   
>     @return An integer containing the guarded memory bitmap.
>   **/
> -UINTN
> +UINT64
>   GetGuardedMemoryBits (
>     IN EFI_PHYSICAL_ADDRESS    Address,
>     IN UINTN                   NumberOfPages
> @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
>   {
>     UINT64            *BitMap;
>     UINTN             Bits;
> -  UINTN             Result;
> +  UINT64            Result;
>     UINTN             Shift;
>     UINTN             BitsToUnitEnd;
>   
> @@ -660,15 +665,16 @@ IsPageTypeToGuard (
>   /**
>     Check to see if the heap guard is enabled for page and/or pool allocation.
>   
> +  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
> +
>     @return TRUE/FALSE.
>   **/
>   BOOLEAN
>   IsHeapGuardEnabled (
> -  VOID
> +  UINT8           GuardType
>     )
>   {
> -  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages,
> -                              GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_PAGE);
> +  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages, GuardType);
>   }
>   
>   /**
> @@ -1203,6 +1209,380 @@ SetAllGuardPages (
>     }
>   }
>   
> +/**
> +  Find the address of top-most guarded free page.
> +
> +  @param[out]  Address    Start address of top-most guarded free page.
> +
> +  @return VOID.
> +**/
> +VOID
> +GetLastGuardedFreePageAddress (
> +  OUT EFI_PHYSICAL_ADDRESS      *Address
> +  )
> +{
> +  EFI_PHYSICAL_ADDRESS    AddressGranularity;
> +  EFI_PHYSICAL_ADDRESS    BaseAddress;
> +  UINTN                   Level;
> +  UINT64                  Map;
> +  INTN                    Index;
> +
> +  ASSERT (mMapLevel >= 1);
> +
> +  BaseAddress = 0;
> +  Map = mGuardedMemoryMap;
> +  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> +       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
> +       ++Level) {
> +    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
> +
> +    //
> +    // Find the non-NULL entry at largest index.
> +    //
> +    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
> +      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
> +        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
> +        Map = ((UINT64 *)(UINTN)Map)[Index];
> +        break;
> +      }
> +    }
> +  }
> +
> +  //
> +  // Find the non-zero MSB then get the page address.
> +  //
> +  while (Map != 0) {
> +    Map = RShiftU64 (Map, 1);
> +    BaseAddress += EFI_PAGES_TO_SIZE (1);
> +  }
> +
> +  *Address = BaseAddress;
> +}
> +
> +/**
> +  Record freed pages.
> +
> +  @param[in]  BaseAddress   Base address of just freed pages.
> +  @param[in]  Pages         Number of freed pages.
> +
> +  @return VOID.
> +**/
> +VOID
> +MarkFreedPages (
> +  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
> +  IN UINTN                    Pages
> +  )
> +{
> +  SetGuardedMemoryBits (BaseAddress, Pages);
> +}
> +
> +/**
> +  Record freed pages as well as mark them as not-present.
> +
> +  @param[in]  BaseAddress   Base address of just freed pages.
> +  @param[in]  Pages         Number of freed pages.
> +
> +  @return VOID.
> +**/
> +VOID
> +EFIAPI
> +GuardFreedPages (
> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> +  IN  UINTN                   Pages
> +  )
> +{
> +  EFI_STATUS      Status;
> +
> +  //
> +  // Legacy memory lower than 1MB might be accessed with no allocation. Leave
> +  // them alone.
> +  //
> +  if (BaseAddress < BASE_1MB) {
> +    return;
> +  }
> +
> +  MarkFreedPages (BaseAddress, Pages);
> +  if (gCpu != NULL) {
> +    //
> +    // Set the flag to make sure memory for page table operations is
> +    // allocated without GUARD; otherwise infinite loops could be caused.
> +    //
> +    mOnGuarding = TRUE;
> +    //
> +    // Note: This might overwrite other attributes needed by other features,
> +    // such as NX memory protection.
> +    //
> +    Status = gCpu->SetMemoryAttributes (
> +                     gCpu,
> +                     BaseAddress,
> +                     EFI_PAGES_TO_SIZE (Pages),
> +                     EFI_MEMORY_RP
> +                     );
> +    //
> +    // Normally we should ASSERT the returned Status. But there might be memory
> +    // alloc/free involved in SetMemoryAttributes(), which might make this call
> +    // fail. It's a rare case, so it's OK to leave a few tiny holes not guarded.
> +    //
> +    if (EFI_ERROR (Status)) {
> +      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%lu)\n", BaseAddress, (UINT64)Pages));
> +    }
> +    mOnGuarding = FALSE;
> +  }
> +}
> +
> +/**
> +  Record freed pages as well as mark them as not-present, if enabled.
> +
> +  @param[in]  BaseAddress   Base address of just freed pages.
> +  @param[in]  Pages         Number of freed pages.
> +
> +  @return VOID.
> +**/
> +VOID
> +EFIAPI
> +GuardFreedPagesChecked (
> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> +  IN  UINTN                   Pages
> +  )
> +{
> +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> +    GuardFreedPages (BaseAddress, Pages);
> +  }
> +}
> +
> +/**
> +  Mark all pages freed before CPU Arch Protocol as not-present.
> +
> +**/
> +VOID
> +GuardAllFreedPages (
> +  VOID
> +  )
> +{
> +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    TableEntry;
> +  UINT64    Address;
> +  UINT64    GuardPage;
> +  INTN      Level;
> +  UINTN     BitIndex;
> +  UINTN     GuardPageNumber;
> +
> +  if (mGuardedMemoryMap == 0 ||
> +      mMapLevel == 0 ||
> +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> +    return;
> +  }
> +
> +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> +
> +  SetMem (Tables, sizeof(Tables), 0);
> +  SetMem (Addresses, sizeof(Addresses), 0);
> +  SetMem (Indices, sizeof(Indices), 0);
> +
> +  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> +  Tables[Level]   = mGuardedMemoryMap;
> +  Address         = 0;
> +  GuardPage       = (UINT64)-1;
> +  GuardPageNumber = 0;
> +
> +  while (TRUE) {
> +    if (Indices[Level] > Entries[Level]) {
> +      Tables[Level] = 0;
> +      Level        -= 1;
> +    } else {
> +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> +      Address     = Addresses[Level];
> +
> +      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> +        Level            += 1;
> +        Tables[Level]     = TableEntry;
> +        Addresses[Level]  = Address;
> +        Indices[Level]    = 0;
> +
> +        continue;
> +      } else {
> +        BitIndex = 1;
> +        while (BitIndex != 0) {
> +          if ((TableEntry & BitIndex) != 0) {
> +            if (GuardPage == (UINT64)-1) {
> +              GuardPage = Address;
> +            }
> +            ++GuardPageNumber;
> +          } else if (GuardPageNumber > 0) {
> +            GuardFreedPages (GuardPage, GuardPageNumber);
> +            GuardPageNumber = 0;
> +            GuardPage       = (UINT64)-1;
> +          }
> +
> +          if (TableEntry == 0) {
> +            break;
> +          }
> +
> +          Address += EFI_PAGES_TO_SIZE (1);
> +          BitIndex = LShiftU64 (BitIndex, 1);
> +        }
> +      }
> +    }
> +
> +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> +      break;
> +    }
> +
> +    Indices[Level] += 1;
> +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> +    Addresses[Level] = Address | LShiftU64 (Indices[Level], Shifts[Level]);
> +
> +  }
> +
> +  //
> +  // Update the maximum address of freed page which can be used for memory
> +  // promotion upon out-of-memory-space.
> +  //
> +  GetLastGuardedFreePageAddress (&Address);
> +  if (Address != 0) {
> +    mLastPromotedPage = Address;
> +  }
> +}
> +
> +/**
> +  This function checks to see if the given memory map descriptor in a memory map
> +  can be merged with any guarded free pages.
> +
> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> +  @param  MaxAddress        Maximum address to stop the merge.
> +
> +  @return VOID
> +
> +**/
> +VOID
> +MergeGuardPages (
> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> +  )
> +{
> +  EFI_PHYSICAL_ADDRESS        EndAddress;
> +  UINT64                      Bitmap;
> +  INTN                        Pages;
> +
> +  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
> +      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
> +    return;
> +  }
> +
> +  Bitmap = 0;
> +  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
> +  Pages -= MemoryMapEntry->NumberOfPages;
> +  while (Pages > 0) {
> +    if (Bitmap == 0) {
> +      EndAddress = MemoryMapEntry->PhysicalStart +
> +                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
> +      Bitmap = GetGuardedMemoryBits (EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
> +    }
> +
> +    if ((Bitmap & 1) == 0) {
> +      break;
> +    }
> +
> +    Pages--;
> +    MemoryMapEntry->NumberOfPages++;
> +    Bitmap = RShiftU64 (Bitmap, 1);
> +  }
> +}
> +
> +/**
> +  Put part (at most 64 pages at a time) of the guarded free pages back into the free page pool.
> +
> +  Freed-memory guard is used to detect the Use-After-Free (UAF) memory issue,
> +  which uses a 'use then throw away' approach to detect any illegal access to
> +  freed memory. The thrown-away memory will be marked as not-present so that any
> +  access to that memory (after free) will be caught by a page-fault exception.
> +
> +  The problem is that this will consume lots of memory space. Once no memory is
> +  left in the pool to allocate, we have to restore part of the freed pages to
> +  their normal function. Otherwise the whole system will stop functioning.
> +
> +  @param  StartAddress    Start address of promoted memory.
> +  @param  EndAddress      End address of promoted memory.
> +
> +  @return TRUE    Succeeded to promote memory.
> +  @return FALSE   No free memory found.
> +
> +**/
> +BOOLEAN
> +PromoteGuardedFreePages (
> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> +  )
> +{
> +  EFI_STATUS              Status;
> +  UINTN                   AvailablePages;
> +  UINT64                  Bitmap;
> +  EFI_PHYSICAL_ADDRESS    Start;
> +
> +  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> +    return FALSE;
> +  }
> +
> +  //
> +  // Similar to memory allocation service, always search the freed pages in
> +  // descending direction.
> +  //
> +  Start           = mLastPromotedPage;
> +  AvailablePages  = 0;
> +  while (AvailablePages == 0) {
> +    Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
> +    //
> +    // If the address wraps around, try the really freed pages at top.
> +    //
> +    if (Start > mLastPromotedPage) {
> +      GetLastGuardedFreePageAddress (&Start);
> +      ASSERT (Start != 0);
> +      Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
> +    }
> +
> +    Bitmap = GetGuardedMemoryBits (Start, GUARDED_HEAP_MAP_ENTRY_BITS);
> +    while (Bitmap > 0) {
> +      if ((Bitmap & 1) != 0) {
> +        ++AvailablePages;
> +      } else if (AvailablePages == 0) {
> +        Start += EFI_PAGES_TO_SIZE (1);
> +      } else {
> +        break;
> +      }
> +
> +      Bitmap = RShiftU64 (Bitmap, 1);
> +    }
> +  }
> +
> +  if (AvailablePages) {
> +    DEBUG ((DEBUG_INFO, "Promoted pages: %lX (%lx)\r\n", Start, (UINT64)AvailablePages));
> +    ClearGuardedMemoryBits (Start, AvailablePages);
> +
> +    if (gCpu != NULL) {
> +      //
> +      // Set the flag to make sure memory for page table operations is
> +      // allocated without GUARD; otherwise infinite loops could be caused.
> +      //
> +      mOnGuarding = TRUE;
> +      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
> +      ASSERT_EFI_ERROR (Status);
> +      mOnGuarding = FALSE;
> +    }
> +
> +    mLastPromotedPage = Start;
> +    *StartAddress     = Start;
> +    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
> +    return TRUE;
> +  }
> +
> +  return FALSE;
> +}
> +
>   /**
>     Notify function used to set all Guard pages before CPU Arch Protocol installed.
>   **/
> @@ -1212,7 +1592,20 @@ HeapGuardCpuArchProtocolNotify (
>     )
>   {
>     ASSERT (gCpu != NULL);
> -  SetAllGuardPages ();
> +
> +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL) &&
> +      IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> +    DEBUG ((DEBUG_ERROR, "Heap guard and freed memory guard cannot be enabled at the same time.\n"));
> +    CpuDeadLoop ();
> +  }
> +
> +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
> +    SetAllGuardPages ();
> +  }
> +
> +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> +    GuardAllFreedPages ();
> +  }
>   }
>   
>   /**
> @@ -1264,6 +1657,10 @@ DumpGuardedMemoryBitmap (
>     CHAR8     *Ruler1;
>     CHAR8     *Ruler2;
>   
> +  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_ALL)) {
> +    return;
> +  }
> +
>     if (mGuardedMemoryMap == 0 ||
>         mMapLevel == 0 ||
>         mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> index 8c34692439..55a91ec098 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> @@ -1,7 +1,7 @@
>   /** @file
>     Data type, macros and function prototypes of heap guard feature.
>   
> -Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017-2018, Intel Corporation. All rights reserved.<BR>
>   This program and the accompanying materials
>   are licensed and made available under the terms and conditions of the BSD License
>   which accompanies this distribution.  The full text of the license may be found at
> @@ -160,6 +160,9 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>   //
>   #define GUARD_HEAP_TYPE_PAGE        BIT0
>   #define GUARD_HEAP_TYPE_POOL        BIT1
> +#define GUARD_HEAP_TYPE_FREED       BIT4
> +#define GUARD_HEAP_TYPE_ALL         \
> +        (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_FREED)
>   
>   //
>   // Debug message level
> @@ -392,11 +395,13 @@ AdjustPoolHeadF (
>   /**
>     Check to see if the heap guard is enabled for page and/or pool allocation.
>   
> +  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
> +
>     @return TRUE/FALSE.
>   **/
>   BOOLEAN
>   IsHeapGuardEnabled (
> -  VOID
> +  UINT8           GuardType
>     );
>   
>   /**
> @@ -407,6 +412,62 @@ HeapGuardCpuArchProtocolNotify (
>     VOID
>     );
>   
> +/**
> +  This function checks to see if the given memory map descriptor in a memory map
> +  can be merged with any guarded free pages.
> +
> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> +  @param  MaxAddress        Maximum address to stop the merge.
> +
> +  @return VOID
> +
> +**/
> +VOID
> +MergeGuardPages (
> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> +  );
> +
> +/**
> +  Record freed pages as well as mark them as not-present, if enabled.
> +
> +  @param[in]  BaseAddress   Base address of just freed pages.
> +  @param[in]  Pages         Number of freed pages.
> +
> +  @return VOID.
> +**/
> +VOID
> +EFIAPI
> +GuardFreedPagesChecked (
> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> +  IN  UINTN                   Pages
> +  );
> +
> +/**
> +  Put part (at most 64 pages at a time) of the guarded free pages back into the free page pool.
> +
> +  Freed-memory guard is used to detect the Use-After-Free (UAF) memory issue,
> +  which uses a 'use then throw away' approach to detect any illegal access to
> +  freed memory. The thrown-away memory will be marked as not-present so that any
> +  access to that memory (after free) will be caught by a page-fault exception.
> +
> +  The problem is that this will consume lots of memory space. Once no memory is
> +  left in the pool to allocate, we have to restore part of the freed pages to
> +  their normal function. Otherwise the whole system will stop functioning.
> +
> +  @param  StartAddress    Start address of promoted memory.
> +  @param  EndAddress      End address of promoted memory.
> +
> +  @return TRUE    Succeeded to promote memory.
> +  @return FALSE   No free memory found.
> +
> +**/
> +BOOLEAN
> +PromoteGuardedFreePages (
> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> +  );
> +
>   extern BOOLEAN mOnGuarding;
>   
>   #endif
> diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
> index 3b4cc08e7c..961c5b8335 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
> +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
> @@ -401,9 +401,12 @@ PromoteMemoryResource (
>     VOID
>     )
>   {
> -  LIST_ENTRY         *Link;
> -  EFI_GCD_MAP_ENTRY  *Entry;
> -  BOOLEAN            Promoted;
> +  LIST_ENTRY                        *Link;
> +  EFI_GCD_MAP_ENTRY                 *Entry;
> +  BOOLEAN                           Promoted;
> +  EFI_PHYSICAL_ADDRESS              StartAddress;
> +  EFI_PHYSICAL_ADDRESS              EndAddress;
> +  EFI_GCD_MEMORY_SPACE_DESCRIPTOR   Descriptor;
>   
>     DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
>   
> @@ -451,6 +454,24 @@ PromoteMemoryResource (
>   
>     CoreReleaseGcdMemoryLock ();
>   
> +  if (!Promoted) {
> +    //
> +    // If freed-memory guard is enabled, we could promote pages from
> +    // guarded free pages.
> +    //
> +    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
> +    if (Promoted) {
> +      CoreGetMemorySpaceDescriptor (StartAddress, &Descriptor);
> +      CoreAddRange (
> +        EfiConventionalMemory,
> +        StartAddress,
> +        EndAddress,
> +        Descriptor.Capabilities & ~(EFI_MEMORY_PRESENT | EFI_MEMORY_INITIALIZED |
> +                                    EFI_MEMORY_TESTED | EFI_MEMORY_RUNTIME)
> +        );
> +    }
> +  }
> +
>     return Promoted;
>   }
>   /**
> @@ -896,9 +917,15 @@ CoreConvertPagesEx (
>       }
>   
>       //
> -    // Add our new range in
> +    // Add our new range in. Don't do this for freed pages if freed-memory
> +    // guard is enabled.
>       //
> -    CoreAddRange (MemType, Start, RangeEnd, Attribute);
> +    if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
> +        !ChangingType ||
> +        MemType != EfiConventionalMemory) {
> +      CoreAddRange (MemType, Start, RangeEnd, Attribute);
> +    }
> +
>       if (ChangingType && (MemType == EfiConventionalMemory)) {
>         //
>         // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
> @@ -1514,6 +1541,7 @@ CoreFreePages (
>   
>     Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
>     if (!EFI_ERROR (Status)) {
> +    GuardFreedPagesChecked (Memory, NumberOfPages);
>       CoreUpdateProfile (
>         (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
>         MemoryProfileActionFreePages,
> @@ -1908,9 +1936,7 @@ Done:
>     *MemoryMapSize = BufferSize;
>   
>     DEBUG_CODE (
> -    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
> -      DumpGuardedMemoryBitmap ();
> -    }
> +    DumpGuardedMemoryBitmap ();
>     );
>   
>     return Status;
> diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> index 1ff2061f7f..b9182ea807 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
> +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> @@ -1,7 +1,7 @@
>   /** @file
>     UEFI Memory pool management functions.
>   
> -Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
>   This program and the accompanying materials
>   are licensed and made available under the terms and conditions of the BSD License
>   which accompanies this distribution.  The full text of the license may be found at
> @@ -26,7 +26,8 @@ typedef struct {
>   } POOL_FREE;
>   
>   
> -#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
> +#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
> +#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
>   typedef struct {
>     UINT32          Signature;
>     UINT32          Reserved;
> @@ -367,6 +368,7 @@ CoreAllocatePoolI (
>     UINTN       NoPages;
>     UINTN       Granularity;
>     BOOLEAN     HasPoolTail;
> +  BOOLEAN     PageAsPool;
>   
>     ASSERT_LOCKED (&mPoolMemoryLock);
>   
> @@ -386,6 +388,7 @@ CoreAllocatePoolI (
>   
>     HasPoolTail  = !(NeedGuard &&
>                      ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> +  PageAsPool = (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) && !mOnGuarding);
>   
>     //
>     // Adjusting the Size to be of proper alignment so that
> @@ -406,7 +409,7 @@ CoreAllocatePoolI (
>     // If allocation is over max size, just allocate pages for the request
>     // (slow)
>     //
> -  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
> +  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
>       if (!HasPoolTail) {
>         Size -= sizeof (POOL_TAIL);
>       }
> @@ -498,7 +501,7 @@ Done:
>       //
>       // If we have a pool buffer, fill in the header & tail info
>       //
> -    Head->Signature = POOL_HEAD_SIGNATURE;
> +    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
>       Head->Size      = Size;
>       Head->Type      = (EFI_MEMORY_TYPE) PoolType;
>       Buffer          = Head->Data;
> @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
>     CoreFreePoolPages (Memory, NoPages);
>     CoreReleaseMemoryLock ();
>   
> +  GuardFreedPagesChecked (Memory, NoPages);
>     ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
>       (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
>   }
> @@ -685,15 +689,19 @@ CoreFreePoolI (
>     UINTN       Granularity;
>     BOOLEAN     IsGuarded;
>     BOOLEAN     HasPoolTail;
> +  BOOLEAN     PageAsPool;
>   
>     ASSERT(Buffer != NULL);
>     //
>     // Get the head & tail of the pool entry
>     //
> -  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
> +  Head = BASE_CR (Buffer, POOL_HEAD, Data);
>     ASSERT(Head != NULL);
>   
> -  if (Head->Signature != POOL_HEAD_SIGNATURE) {
> +  if (Head->Signature != POOL_HEAD_SIGNATURE &&
> +      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
> +    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
> +            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
>       return EFI_INVALID_PARAMETER;
>     }
>   
> @@ -701,6 +709,7 @@ CoreFreePoolI (
>                   IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
>     HasPoolTail = !(IsGuarded &&
>                     ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> +  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
>   
>     if (HasPoolTail) {
>       Tail = HEAD_TO_TAIL (Head);
> @@ -757,7 +766,7 @@ CoreFreePoolI (
>     //
>     // If it's not on the list, it must be pool pages
>     //
> -  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
> +  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
>   
>       //
>       // Return the memory pages back to free memory
> diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> index fa8f8fe91a..6298b67db1 100644
> --- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> +++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> @@ -1250,7 +1250,7 @@ ApplyMemoryProtectionPolicy (
>     // Don't overwrite Guard pages, which should be the first and/or last page,
>     // if any.
>     //
> -  if (IsHeapGuardEnabled ()) {
> +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
>       if (IsGuardPage (Memory))  {
>         Memory += EFI_PAGE_SIZE;
>         Length -= EFI_PAGE_SIZE;
> diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> index 05eb4f422b..04cfb2dab2 100644
> --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> @@ -1,7 +1,7 @@
>   /** @file
>     UEFI PropertiesTable support
>   
> -Copyright (c) 2015 - 2017, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2015 - 2018, Intel Corporation. All rights reserved.<BR>
>   This program and the accompanying materials
>   are licensed and made available under the terms and conditions of the BSD License
>   which accompanies this distribution.  The full text of the license may be found at
> @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>   #include <Guid/PropertiesTable.h>
>   
>   #include "DxeMain.h"
> +#include "HeapGuard.h"
>   
>   #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
>     ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
> @@ -205,16 +206,13 @@ MergeMemoryMap (
>       NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
>   
>       do {
> -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
> +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
> +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
>         if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
> -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> -        if (NewMemoryMapEntry != MemoryMapEntry) {
> -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> -        }
> -
> +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
>           NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
>           continue;
>         } else {
> 



^ permalink raw reply	[flat|nested] 15+ messages in thread
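
The promotion path quoted above works by scanning the guarded-memory bitmap for a run of set bits (freed, still-guarded pages) and handing at most one 64-page batch back to the allocator. Below is a simplified, single-word, ascending-order sketch of that scan; the real PromoteGuardedFreePages() walks downward from mLastPromotedPage and handles a multi-level map.

    #include <stdint.h>
    #include <stdio.h>

    /* One bitmap word covers 64 pages: bit i set means page i was freed and
       is still guarded (not-present). Return the length of the first run of
       freed pages and its starting page index. */
    static unsigned
    FindFreedRun (uint64_t Bitmap, unsigned *StartPage)
    {
      unsigned Page  = 0;
      unsigned Count = 0;

      while (Bitmap != 0) {
        if ((Bitmap & 1) != 0) {
          if (Count == 0) {
            *StartPage = Page;   /* first freed page of the run */
          }
          Count++;
        } else if (Count != 0) {
          break;                 /* run ended: promote what we found */
        }
        Bitmap >>= 1;
        Page++;
      }
      return Count;
    }

    int
    main (void)
    {
      unsigned Start = 0;
      unsigned Pages = FindFreedRun (0x3F0u, &Start);  /* pages 4..9 freed */

      printf ("promote %u pages starting at page %u\n", Pages, Start);
      /* prints: promote 6 pages starting at page 4 */
      return 0;
    }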

* Re: [PATCH v4 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock
  2018-10-26  1:17     ` Zeng, Star
@ 2018-10-26  1:19       ` Wang, Jian J
  0 siblings, 0 replies; 15+ messages in thread
From: Wang, Jian J @ 2018-10-26  1:19 UTC (permalink / raw)
  To: Zeng, Star, edk2-devel, edk2-devel@lists.01.org
  Cc: Kinney, Michael D, Ni, Ruiyu, Yao, Jiewen, Laszlo Ersek

Sorry for missing that one. I'll update it. Thanks for the comments.

Regards,
Jian


> -----Original Message-----
> From: Zeng, Star
> Sent: Friday, October 26, 2018 9:18 AM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel <edk2-devel-
> bounces@lists.01.org>; edk2-devel@lists.01.org
> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
> Subject: Re: [edk2] [PATCH v4 5/6] MdeModulePkg/Core: prevent re-acquire
> GCD memory lock
> 
> On 2018/10/25 15:20, Wang, Jian J wrote:
> > Sorry, I forgot to update the commit message:
> >
> >> This issue is hidden in the current code but exposed by the introduction
> >> of the freed-memory guard feature, due to the fact that the feature
> >> will turn all pool allocations into page allocations.
> >>
> >> The solution is to move the memory allocation in CoreGetMemorySpaceMap()
> >> out of the GCD memory map lock.
> >
> > Regards,
> > Jian
> >
> >
> >> -----Original Message-----
> >> From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org]
> >> Sent: Thursday, October 25, 2018 3:18 PM
> >> To: edk2-devel@lists.01.org
> >> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
> >> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Zeng, Star
> >> <star.zeng@intel.com>; Laszlo Ersek <lersek@redhat.com>
> >> Subject: [edk2] [PATCH v4 5/6] MdeModulePkg/Core: prevent re-acquire GCD
> >> memory lock
> >>
> >>> v4 changes:
> >>> a. add more comments from Laszlo
> >>
> >> This issue is hidden in the current code but exposed by the introduction
> >> of the freed-memory guard feature, due to the fact that the feature
> >> will turn all pool allocations into page allocations.
> >>
> >> The solution is to move the memory allocation in CoreGetMemorySpaceMap()
> >> and CoreGetIoSpaceMap() out of the GCD memory map lock.
> >>
> >>     CoreDumpGcdMemorySpaceMap()
> >> => CoreGetMemorySpaceMap()
> >> => CoreAcquireGcdMemoryLock () *
> >>     AllocatePool()
> >> => InternalAllocatePool()
> >> => CoreAllocatePool()
> >> => CoreAllocatePoolI()
> >> => CoreAllocatePoolPagesI()
> >> => CoreAllocatePoolPages()
> >> => FindFreePages()
> >> => PromoteMemoryResource()
> >> => CoreAcquireGcdMemoryLock()  **
> >>
> >> Cc: Star Zeng <star.zeng@intel.com>
> >> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> >> Cc: Jiewen Yao <jiewen.yao@intel.com>
> >> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> >> Cc: Laszlo Ersek <lersek@redhat.com>
> >> Contributed-under: TianoCore Contribution Agreement 1.1
> >> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> >> ---
> >>   MdeModulePkg/Core/Dxe/Gcd/Gcd.c | 87 +++++++++++++++++++++++++++++------------
> >>   1 file changed, 62 insertions(+), 25 deletions(-)
> >>
> >> diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> >> index d9c65a8045..f99d6bb933 100644
> >> --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> >> +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> >> @@ -1691,10 +1691,10 @@ CoreGetMemorySpaceMap (
> >>     OUT EFI_GCD_MEMORY_SPACE_DESCRIPTOR  **MemorySpaceMap
> >>     )
> >>   {
> >> -  EFI_STATUS                       Status;
> >>     LIST_ENTRY                       *Link;
> >>     EFI_GCD_MAP_ENTRY                *Entry;
> >>     EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *Descriptor;
> >> +  UINTN                            DescriptorCount;
> >>
> >>     //
> >>     // Make sure parameters are valid
> >> @@ -1706,38 +1706,75 @@ CoreGetMemorySpaceMap (
> >>       return EFI_INVALID_PARAMETER;
> >>     }
> >>
> >> -  CoreAcquireGcdMemoryLock ();
> >> +  *NumberOfDescriptors  = 0;
> >> +  *MemorySpaceMap       = NULL;
> >>
> >>     //
> >> -  // Count the number of descriptors
> >> +  // Take the lock, for entering the loop with the lock held.
> >>     //
> >> -  *NumberOfDescriptors = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
> >> +  CoreAcquireGcdMemoryLock ();
> >> +  while (TRUE) {
> >> +    //
> >> +    // Count the number of descriptors. It might be done more than once
> 
> How about using "Count the descriptors" here? No need to send a new patch
> series for it.
> 
> >> because
> >> +    // there's code which has to be running outside the GCD lock.
> 
> I gave a comment on this line in the V3 series.
> How about using "AllocatePool() calling code below" instead of "there's
> code", to be more specific?
> 
> Thanks,
> Star
> 
> >> +    //
> >> +    DescriptorCount = CoreCountGcdMapEntry (&mGcdMemorySpaceMap);
> >> +    if (DescriptorCount == *NumberOfDescriptors) {
> >> +      //
> >> +      // Fill in the MemorySpaceMap if no memory space map change.
> >> +      //
> >> +      Descriptor = *MemorySpaceMap;
> >> +      Link = mGcdMemorySpaceMap.ForwardLink;
> >> +      while (Link != &mGcdMemorySpaceMap) {
> >> +        Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
> >> +        BuildMemoryDescriptor (Descriptor, Entry);
> >> +        Descriptor++;
> >> +        Link = Link->ForwardLink;
> >> +      }
> >> +      //
> >> +      // We're done; exit the loop with the lock held.
> >> +      //
> >> +      break;
> >> +    }
> >>
> >> -  //
> >> -  // Allocate the MemorySpaceMap
> >> -  //
> >> -  *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
> >> -  if (*MemorySpaceMap == NULL) {
> >> -    Status = EFI_OUT_OF_RESOURCES;
> >> -    goto Done;
> >> -  }
> >> +    //
> >> +    // Release the lock before memory allocation, because it might cause a
> >> +    // GCD lock conflict in one of the calling paths in AllocatePool().
> >> +    //
> >> +    CoreReleaseGcdMemoryLock ();
> >> +
> >> +    //
> >> +    // Allocate memory to store the MemorySpaceMap. Note it might already be
> >> +    // allocated if the map descriptors changed during the previous round of
> >> +    // memory allocation.
> >> +    //
> >> +    if (*MemorySpaceMap != NULL) {
> >> +      FreePool (*MemorySpaceMap);
> >> +    }
> >>
> >> +    *MemorySpaceMap = AllocatePool (DescriptorCount *
> >> +                                    sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
> >> +    if (*MemorySpaceMap == NULL) {
> >> +      *NumberOfDescriptors = 0;
> >> +      return EFI_OUT_OF_RESOURCES;
> >> +    }
> >> +
> >> +    //
> >> +    // Save the descriptor count obtained above for another round of checking,
> >> +    // to make sure we won't miss any change, since there is code running
> >> +    // outside the GCD lock.
> >> +    //
> >> +    *NumberOfDescriptors = DescriptorCount;
> >> +    //
> >> +    // Re-acquire the lock, for the next iteration.
> >> +    //
> >> +    CoreAcquireGcdMemoryLock ();
> >> +  }
> >>     //
> >> -  // Fill in the MemorySpaceMap
> >> +  // We exited the loop with the lock held, release it.
> >>     //
> >> -  Descriptor = *MemorySpaceMap;
> >> -  Link = mGcdMemorySpaceMap.ForwardLink;
> >> -  while (Link != &mGcdMemorySpaceMap) {
> >> -    Entry = CR (Link, EFI_GCD_MAP_ENTRY, Link, EFI_GCD_MAP_SIGNATURE);
> >> -    BuildMemoryDescriptor (Descriptor, Entry);
> >> -    Descriptor++;
> >> -    Link = Link->ForwardLink;
> >> -  }
> >> -  Status = EFI_SUCCESS;
> >> -
> >> -Done:
> >>     CoreReleaseGcdMemoryLock ();
> >> -  return Status;
> >> +
> >> +  return EFI_SUCCESS;
> >>   }
> >>
> >>
> >> --
> >> 2.16.2.windows.1
> >>
> >> _______________________________________________
> >> edk2-devel mailing list
> >> edk2-devel@lists.01.org
> >> https://lists.01.org/mailman/listinfo/edk2-devel


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 6/6] MdeModulePkg/Core: add freed-memory guard feature
  2018-10-26  1:18   ` Zeng, Star
@ 2018-10-26  1:20     ` Wang, Jian J
  0 siblings, 0 replies; 15+ messages in thread
From: Wang, Jian J @ 2018-10-26  1:20 UTC (permalink / raw)
  To: Zeng, Star, edk2-devel@lists.01.org
  Cc: Kinney, Michael D, Ni, Ruiyu, Yao, Jiewen, Laszlo Ersek

Sure. Thanks.

Regards,
Jian


> -----Original Message-----
> From: Zeng, Star
> Sent: Friday, October 26, 2018 9:19 AM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
> Subject: Re: [edk2] [PATCH v4 6/6] MdeModulePkg/Core: add freed-memory
> guard feature
> 
> On 2018/10/25 15:18, Jian J Wang wrote:
> >> v4 changes:
> >> a. replace hard-coded memory attributes with the value returned by
> >>     CoreGetMemorySpaceDescriptor()
> >> b. remove the enclosure of CoreAcquireGcdMemoryLock() and
> >>     CoreReleaseGcdMemoryLock() around CoreAddRange()
> >
> > Freed-memory guard is used to detect the UAF (Use-After-Free) memory issue,
> > which is an illegal access to memory that has been freed. The principle
> > behind it is similar to the heap guard feature, that is, we turn all pool
> 
> Remember to use "pool/page heap guard feature" instead of "heap guard
> feature" here. No need to send new patch series for it.
> 
> Thanks,
> Star
> 
> > memory allocations into page allocations and mark them as not-present
> > once they are freed.
> >
> > This also implies that, once a page is allocated and freed, it cannot
> > be re-allocated. This brings another issue: there is a risk that the
> > memory space will run out. To address it, the memory service adds logic
> > to put part (at most 64 pages at a time) of the freed pages back into
> > the page pool, so that the memory service still has memory to allocate
> > even after all memory space has been allocated once. This is called
> > memory promotion. The promoted pages are always taken from the eldest
> > pages that have been freed.
> >
> > Another problem this feature brings is that the number of memory map
> > descriptors increases enormously (200+ -> 2000+). One of the changes in
> > this patch is to update MergeMemoryMap() in PropertiesTable.c to allow
> > merging freed pages back into the memory map. With that, the number
> > stays at around 510.
> >
> > Cc: Star Zeng <star.zeng@intel.com>
> > Cc: Michael D Kinney <michael.d.kinney@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> > ---
> >   MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 409 +++++++++++++++++++++++++-
> >   MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |  65 +++-
> >   MdeModulePkg/Core/Dxe/Mem/Page.c              |  42 ++-
> >   MdeModulePkg/Core/Dxe/Mem/Pool.c              |  23 +-
> >   MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   2 +-
> >   MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c  |  18 +-
> >   6 files changed, 525 insertions(+), 34 deletions(-)
> >
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > index 663f969c0d..449a022658 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >   GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >                                       = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
> >
> > +//
> > +// Used for promoting freed but not used pages.
> > +//
> > +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
> > +
> >   /**
> >     Set corresponding bits in bitmap table to 1 according to the address.
> >
> > @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
> >
> >     @return An integer containing the guarded memory bitmap.
> >   **/
> > -UINTN
> > +UINT64
> >   GetGuardedMemoryBits (
> >     IN EFI_PHYSICAL_ADDRESS    Address,
> >     IN UINTN                   NumberOfPages
> > @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
> >   {
> >     UINT64            *BitMap;
> >     UINTN             Bits;
> > -  UINTN             Result;
> > +  UINT64            Result;
> >     UINTN             Shift;
> >     UINTN             BitsToUnitEnd;
> >
> > @@ -660,15 +665,16 @@ IsPageTypeToGuard (
> >   /**
> >     Check to see if the heap guard is enabled for page and/or pool allocation.
> >
> > +  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
> > +
> >     @return TRUE/FALSE.
> >   **/
> >   BOOLEAN
> >   IsHeapGuardEnabled (
> > -  VOID
> > +  UINT8           GuardType
> >     )
> >   {
> > -  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages,
> > -                              GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_PAGE);
> > +  return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages, GuardType);
> >   }
> >
> >   /**
> > @@ -1203,6 +1209,380 @@ SetAllGuardPages (
> >     }
> >   }
> >
> > +/**
> > +  Find the address of top-most guarded free page.
> > +
> > +  @param[out]  Address    Start address of top-most guarded free page.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +GetLastGuardedFreePageAddress (
> > +  OUT EFI_PHYSICAL_ADDRESS      *Address
> > +  )
> > +{
> > +  EFI_PHYSICAL_ADDRESS    AddressGranularity;
> > +  EFI_PHYSICAL_ADDRESS    BaseAddress;
> > +  UINTN                   Level;
> > +  UINT64                  Map;
> > +  INTN                    Index;
> > +
> > +  ASSERT (mMapLevel >= 1);
> > +
> > +  BaseAddress = 0;
> > +  Map = mGuardedMemoryMap;
> > +  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > +       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
> > +       ++Level) {
> > +    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
> > +
> > +    //
> > +    // Find the non-NULL entry at largest index.
> > +    //
> > +    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
> > +      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
> > +        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
> > +        Map = ((UINT64 *)(UINTN)Map)[Index];
> > +        break;
> > +      }
> > +    }
> > +  }
> > +
> > +  //
> > +  // Find the non-zero MSB then get the page address.
> > +  //
> > +  while (Map != 0) {
> > +    Map = RShiftU64 (Map, 1);
> > +    BaseAddress += EFI_PAGES_TO_SIZE (1);
> > +  }
> > +
> > +  *Address = BaseAddress;
> > +}
> > +
> > +/**
> > +  Record freed pages.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +MarkFreedPages (
> > +  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
> > +  IN UINTN                    Pages
> > +  )
> > +{
> > +  SetGuardedMemoryBits (BaseAddress, Pages);
> > +}
> > +
> > +/**
> > +  Record freed pages as well as mark them as not-present.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPages (
> > +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> > +  IN  UINTN                   Pages
> > +  )
> > +{
> > +  EFI_STATUS      Status;
> > +
> > +  //
> > +  // Legacy memory lower than 1MB might be accessed with no allocation. Leave
> > +  // them alone.
> > +  //
> > +  if (BaseAddress < BASE_1MB) {
> > +    return;
> > +  }
> > +
> > +  MarkFreedPages (BaseAddress, Pages);
> > +  if (gCpu != NULL) {
> > +    //
> > +    // Set the flag to make sure memory for page table operations is allocated
> > +    // without guards; otherwise infinite loops could be caused.
> > +    //
> > +    mOnGuarding = TRUE;
> > +    //
> > +    // Note: This might overwrite other attributes needed by other features,
> > +    // such as NX memory protection.
> > +    //
> > +    Status = gCpu->SetMemoryAttributes (
> > +                     gCpu,
> > +                     BaseAddress,
> > +                     EFI_PAGES_TO_SIZE (Pages),
> > +                     EFI_MEMORY_RP
> > +                     );
> > +    //
> > +    // Normally we should ASSERT the returned Status. But there might be memory
> > +    // alloc/free involved in SetMemoryAttributes(), which might cause this call
> > +    // to fail. It's a rare case, so it's OK to leave a few tiny holes unguarded.
> > +    //
> > +    if (EFI_ERROR (Status)) {
> > +      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%lu)\n", BaseAddress, (UINT64)Pages));
> > +    }
> > +    mOnGuarding = FALSE;
> > +  }
> > +}
> > +
> > +/**
> > +  Record freed pages as well as mark them as not-present, if enabled.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPagesChecked (
> > +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> > +  IN  UINTN                   Pages
> > +  )
> > +{
> > +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > +    GuardFreedPages (BaseAddress, Pages);
> > +  }
> > +}
> > +
> > +/**
> > +  Mark all pages freed before CPU Arch Protocol as not-present.
> > +
> > +**/
> > +VOID
> > +GuardAllFreedPages (
> > +  VOID
> > +  )
> > +{
> > +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    TableEntry;
> > +  UINT64    Address;
> > +  UINT64    GuardPage;
> > +  INTN      Level;
> > +  UINTN     BitIndex;
> > +  UINTN     GuardPageNumber;
> > +
> > +  if (mGuardedMemoryMap == 0 ||
> > +      mMapLevel == 0 ||
> > +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > +    return;
> > +  }
> > +
> > +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> > +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> > +
> > +  SetMem (Tables, sizeof(Tables), 0);
> > +  SetMem (Addresses, sizeof(Addresses), 0);
> > +  SetMem (Indices, sizeof(Indices), 0);
> > +
> > +  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > +  Tables[Level]   = mGuardedMemoryMap;
> > +  Address         = 0;
> > +  GuardPage       = (UINT64)-1;
> > +  GuardPageNumber = 0;
> > +
> > +  while (TRUE) {
> > +    if (Indices[Level] > Entries[Level]) {
> > +      Tables[Level] = 0;
> > +      Level        -= 1;
> > +    } else {
> > +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> > +      Address     = Addresses[Level];
> > +
> > +      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> > +        Level            += 1;
> > +        Tables[Level]     = TableEntry;
> > +        Addresses[Level]  = Address;
> > +        Indices[Level]    = 0;
> > +
> > +        continue;
> > +      } else {
> > +        BitIndex = 1;
> > +        while (BitIndex != 0) {
> > +          if ((TableEntry & BitIndex) != 0) {
> > +            if (GuardPage == (UINT64)-1) {
> > +              GuardPage = Address;
> > +            }
> > +            ++GuardPageNumber;
> > +          } else if (GuardPageNumber > 0) {
> > +            GuardFreedPages (GuardPage, GuardPageNumber);
> > +            GuardPageNumber = 0;
> > +            GuardPage       = (UINT64)-1;
> > +          }
> > +
> > +          if (TableEntry == 0) {
> > +            break;
> > +          }
> > +
> > +          Address += EFI_PAGES_TO_SIZE (1);
> > +          BitIndex = LShiftU64 (BitIndex, 1);
> > +        }
> > +      }
> > +    }
> > +
> > +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> > +      break;
> > +    }
> > +
> > +    Indices[Level] += 1;
> > +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> > +    Addresses[Level] = Address | LShiftU64 (Indices[Level], Shifts[Level]);
> > +
> > +  }
> > +
> > +  //
> > +  // Update the maximum address of the freed pages, which can be used for memory
> > +  // promotion upon out-of-memory-space.
> > +  //
> > +  GetLastGuardedFreePageAddress (&Address);
> > +  if (Address != 0) {
> > +    mLastPromotedPage = Address;
> > +  }
> > +}
> > +
> > +/**
> > +  This function checks to see if the given memory map descriptor in a memory map
> > +  can be merged with any guarded free pages.
> > +
> > +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> > +  @param  MaxAddress        Maximum address to stop the merge.
> > +
> > +  @return VOID
> > +
> > +**/
> > +VOID
> > +MergeGuardPages (
> > +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> > +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> > +  )
> > +{
> > +  EFI_PHYSICAL_ADDRESS        EndAddress;
> > +  UINT64                      Bitmap;
> > +  INTN                        Pages;
> > +
> > +  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
> > +      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
> > +    return;
> > +  }
> > +
> > +  Bitmap = 0;
> > +  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
> > +  Pages -= MemoryMapEntry->NumberOfPages;
> > +  while (Pages > 0) {
> > +    if (Bitmap == 0) {
> > +      EndAddress = MemoryMapEntry->PhysicalStart +
> > +                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
> > +      Bitmap = GetGuardedMemoryBits (EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    }
> > +
> > +    if ((Bitmap & 1) == 0) {
> > +      break;
> > +    }
> > +
> > +    Pages--;
> > +    MemoryMapEntry->NumberOfPages++;
> > +    Bitmap = RShiftU64 (Bitmap, 1);
> > +  }
> > +}
> > +
> > +/**
> > +  Put part (at most 64 pages at a time) of the guarded free pages back into
> > +  the free page pool.
> > +
> > +  Freed memory guard is used to detect the Use-After-Free (UAF) memory issue,
> > +  which makes use of a 'use then throw away' approach to detect any illegal
> > +  access to freed memory. The thrown-away memory will be marked as not-present
> > +  so that any access to that memory (after free) will be caught by a page-fault
> > +  exception.
> > +
> > +  The problem is that this will consume lots of memory space. Once no memory
> > +  is left in the pool to allocate, we have to restore part of the freed pages
> > +  to their normal function. Otherwise the whole system will stop functioning.
> > +
> > +  @param  StartAddress    Start address of promoted memory.
> > +  @param  EndAddress      End address of promoted memory.
> > +
> > +  @return TRUE    Succeeded to promote memory.
> > +  @return FALSE   No free memory found.
> > +
> > +**/
> > +BOOLEAN
> > +PromoteGuardedFreePages (
> > +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> > +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> > +  )
> > +{
> > +  EFI_STATUS              Status;
> > +  UINTN                   AvailablePages;
> > +  UINT64                  Bitmap;
> > +  EFI_PHYSICAL_ADDRESS    Start;
> > +
> > +  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > +    return FALSE;
> > +  }
> > +
> > +  //
> > +  // Similar to memory allocation service, always search the freed pages in
> > +  // descending direction.
> > +  //
> > +  Start           = mLastPromotedPage;
> > +  AvailablePages  = 0;
> > +  while (AvailablePages == 0) {
> > +    Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    //
> > +    // If the address wraps around, try the really freed pages at top.
> > +    //
> > +    if (Start > mLastPromotedPage) {
> > +      GetLastGuardedFreePageAddress (&Start);
> > +      ASSERT (Start != 0);
> > +      Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    }
> > +
> > +    Bitmap = GetGuardedMemoryBits (Start, GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    while (Bitmap > 0) {
> > +      if ((Bitmap & 1) != 0) {
> > +        ++AvailablePages;
> > +      } else if (AvailablePages == 0) {
> > +        Start += EFI_PAGES_TO_SIZE (1);
> > +      } else {
> > +        break;
> > +      }
> > +
> > +      Bitmap = RShiftU64 (Bitmap, 1);
> > +    }
> > +  }
> > +
> > +  if (AvailablePages) {
> > +    DEBUG ((DEBUG_INFO, "Promoted pages: %lX (%lx)\r\n", Start, (UINT64)AvailablePages));
> > +    ClearGuardedMemoryBits (Start, AvailablePages);
> > +
> > +    if (gCpu != NULL) {
> > +      //
> > +      // Set the flag to make sure memory for page table operations is allocated
> > +      // without guards; otherwise infinite loops could be caused.
> > +      //
> > +      mOnGuarding = TRUE;
> > +      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
> > +      ASSERT_EFI_ERROR (Status);
> > +      mOnGuarding = FALSE;
> > +    }
> > +
> > +    mLastPromotedPage = Start;
> > +    *StartAddress     = Start;
> > +    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
> > +    return TRUE;
> > +  }
> > +
> > +  return FALSE;
> > +}
> > +
> >   /**
> >     Notify function used to set all Guard pages before CPU Arch Protocol installed.
> >   **/
> > @@ -1212,7 +1592,20 @@ HeapGuardCpuArchProtocolNotify (
> >     )
> >   {
> >     ASSERT (gCpu != NULL);
> > -  SetAllGuardPages ();
> > +
> > +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL) &&
> > +      IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > +    DEBUG ((DEBUG_ERROR, "Heap guard and freed memory guard cannot be enabled at the same time.\n"));
> > +    CpuDeadLoop ();
> > +  }
> > +
> > +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
> > +    SetAllGuardPages ();
> > +  }
> > +
> > +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > +    GuardAllFreedPages ();
> > +  }
> >   }
> >
> >   /**
> > @@ -1264,6 +1657,10 @@ DumpGuardedMemoryBitmap (
> >     CHAR8     *Ruler1;
> >     CHAR8     *Ruler2;
> >
> > +  if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_ALL)) {
> > +    return;
> > +  }
> > +
> >     if (mGuardedMemoryMap == 0 ||
> >         mMapLevel == 0 ||
> >         mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > index 8c34692439..55a91ec098 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > @@ -1,7 +1,7 @@
> >   /** @file
> >     Data type, macros and function prototypes of heap guard feature.
> >
> > -Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
> > +Copyright (c) 2017-2018, Intel Corporation. All rights reserved.<BR>
> >   This program and the accompanying materials
> >   are licensed and made available under the terms and conditions of the BSD License
> >   which accompanies this distribution.  The full text of the license may be found at
> > @@ -160,6 +160,9 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> >   //
> >   #define GUARD_HEAP_TYPE_PAGE        BIT0
> >   #define GUARD_HEAP_TYPE_POOL        BIT1
> > +#define GUARD_HEAP_TYPE_FREED       BIT4
> > +#define GUARD_HEAP_TYPE_ALL         \
> > +        (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_FREED)
> >
> >   //
> >   // Debug message level
> > @@ -392,11 +395,13 @@ AdjustPoolHeadF (
> >   /**
> >     Check to see if the heap guard is enabled for page and/or pool allocation.
> >
> > +  @param[in]  GuardType   Specify the sub-type(s) of Heap Guard.
> > +
> >     @return TRUE/FALSE.
> >   **/
> >   BOOLEAN
> >   IsHeapGuardEnabled (
> > -  VOID
> > +  UINT8           GuardType
> >     );
> >
> >   /**
> > @@ -407,6 +412,62 @@ HeapGuardCpuArchProtocolNotify (
> >     VOID
> >     );
> >
> > +/**
> > +  This function checks to see if the given memory map descriptor in a memory map
> > +  can be merged with any guarded free pages.
> > +
> > +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> > +  @param  MaxAddress        Maximum address to stop the merge.
> > +
> > +  @return VOID
> > +
> > +**/
> > +VOID
> > +MergeGuardPages (
> > +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> > +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> > +  );
> > +
> > +/**
> > +  Record freed pages as well as mark them as not-present, if enabled.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPagesChecked (
> > +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> > +  IN  UINTN                   Pages
> > +  );
> > +
> > +/**
> > +  Put part (at most 64 pages at a time) of the guarded free pages back into
> > +  the free page pool.
> > +
> > +  Freed memory guard is used to detect the Use-After-Free (UAF) memory issue,
> > +  which makes use of a 'use then throw away' approach to detect any illegal
> > +  access to freed memory. The thrown-away memory will be marked as not-present
> > +  so that any access to that memory (after free) will be caught by a page-fault
> > +  exception.
> > +
> > +  The problem is that this will consume lots of memory space. Once no memory
> > +  is left in the pool to allocate, we have to restore part of the freed pages
> > +  to their normal function. Otherwise the whole system will stop functioning.
> > +
> > +  @param  StartAddress    Start address of promoted memory.
> > +  @param  EndAddress      End address of promoted memory.
> > +
> > +  @return TRUE    Succeeded to promote memory.
> > +  @return FALSE   No free memory found.
> > +
> > +**/
> > +BOOLEAN
> > +PromoteGuardedFreePages (
> > +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> > +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> > +  );
> > +
> >   extern BOOLEAN mOnGuarding;
> >
> >   #endif
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
> > index 3b4cc08e7c..961c5b8335 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
> > @@ -401,9 +401,12 @@ PromoteMemoryResource (
> >     VOID
> >     )
> >   {
> > -  LIST_ENTRY         *Link;
> > -  EFI_GCD_MAP_ENTRY  *Entry;
> > -  BOOLEAN            Promoted;
> > +  LIST_ENTRY                        *Link;
> > +  EFI_GCD_MAP_ENTRY                 *Entry;
> > +  BOOLEAN                           Promoted;
> > +  EFI_PHYSICAL_ADDRESS              StartAddress;
> > +  EFI_PHYSICAL_ADDRESS              EndAddress;
> > +  EFI_GCD_MEMORY_SPACE_DESCRIPTOR   Descriptor;
> >
> >     DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
> >
> > @@ -451,6 +454,24 @@ PromoteMemoryResource (
> >
> >     CoreReleaseGcdMemoryLock ();
> >
> > +  if (!Promoted) {
> > +    //
> > +    // If freed-memory guard is enabled, we could promote pages from
> > +    // guarded free pages.
> > +    //
> > +    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
> > +    if (Promoted) {
> > +      CoreGetMemorySpaceDescriptor (StartAddress, &Descriptor);
> > +      CoreAddRange (
> > +        EfiConventionalMemory,
> > +        StartAddress,
> > +        EndAddress,
> > +        Descriptor.Capabilities & ~(EFI_MEMORY_PRESENT | EFI_MEMORY_INITIALIZED |
> > +                                    EFI_MEMORY_TESTED | EFI_MEMORY_RUNTIME)
> > +        );
> > +    }
> > +  }
> > +
> >     return Promoted;
> >   }
> >   /**
> > @@ -896,9 +917,15 @@ CoreConvertPagesEx (
> >       }
> >
> >       //
> > -    // Add our new range in
> > +    // Add our new range in. Don't do this for freed pages if freed-memory
> > +    // guard is enabled.
> >       //
> > -    CoreAddRange (MemType, Start, RangeEnd, Attribute);
> > +    if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
> > +        !ChangingType ||
> > +        MemType != EfiConventionalMemory) {
> > +      CoreAddRange (MemType, Start, RangeEnd, Attribute);
> > +    }
> > +
> >       if (ChangingType && (MemType == EfiConventionalMemory)) {
> >         //
> >         // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
> > @@ -1514,6 +1541,7 @@ CoreFreePages (
> >
> >     Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
> >     if (!EFI_ERROR (Status)) {
> > +    GuardFreedPagesChecked (Memory, NumberOfPages);
> >       CoreUpdateProfile (
> >         (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
> >         MemoryProfileActionFreePages,
> > @@ -1908,9 +1936,7 @@ Done:
> >     *MemoryMapSize = BufferSize;
> >
> >     DEBUG_CODE (
> > -    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
> > -      DumpGuardedMemoryBitmap ();
> > -    }
> > +    DumpGuardedMemoryBitmap ();
> >     );
> >
> >     return Status;
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > index 1ff2061f7f..b9182ea807 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > @@ -1,7 +1,7 @@
> >   /** @file
> >     UEFI Memory pool management functions.
> >
> > -Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
> > +Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
> >   This program and the accompanying materials
> >   are licensed and made available under the terms and conditions of the BSD License
> >   which accompanies this distribution.  The full text of the license may be found at
> > @@ -26,7 +26,8 @@ typedef struct {
> >   } POOL_FREE;
> >
> >
> > -#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
> > +#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
> > +#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
> >   typedef struct {
> >     UINT32          Signature;
> >     UINT32          Reserved;
> > @@ -367,6 +368,7 @@ CoreAllocatePoolI (
> >     UINTN       NoPages;
> >     UINTN       Granularity;
> >     BOOLEAN     HasPoolTail;
> > +  BOOLEAN     PageAsPool;
> >
> >     ASSERT_LOCKED (&mPoolMemoryLock);
> >
> > @@ -386,6 +388,7 @@ CoreAllocatePoolI (
> >
> >     HasPoolTail  = !(NeedGuard &&
> >                      ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > +  PageAsPool = (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) && !mOnGuarding);
> >
> >     //
> >     // Adjusting the Size to be of proper alignment so that
> > @@ -406,7 +409,7 @@ CoreAllocatePoolI (
> >     // If allocation is over max size, just allocate pages for the request
> >     // (slow)
> >     //
> > -  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
> > +  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
> >       if (!HasPoolTail) {
> >         Size -= sizeof (POOL_TAIL);
> >       }
> > @@ -498,7 +501,7 @@ Done:
> >       //
> >       // If we have a pool buffer, fill in the header & tail info
> >       //
> > -    Head->Signature = POOL_HEAD_SIGNATURE;
> > +    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
> >       Head->Size      = Size;
> >       Head->Type      = (EFI_MEMORY_TYPE) PoolType;
> >       Buffer          = Head->Data;
> > @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
> >     CoreFreePoolPages (Memory, NoPages);
> >     CoreReleaseMemoryLock ();
> >
> > +  GuardFreedPagesChecked (Memory, NoPages);
> >     ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
> >       (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
> >   }
> > @@ -685,15 +689,19 @@ CoreFreePoolI (
> >     UINTN       Granularity;
> >     BOOLEAN     IsGuarded;
> >     BOOLEAN     HasPoolTail;
> > +  BOOLEAN     PageAsPool;
> >
> >     ASSERT(Buffer != NULL);
> >     //
> >     // Get the head & tail of the pool entry
> >     //
> > -  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
> > +  Head = BASE_CR (Buffer, POOL_HEAD, Data);
> >     ASSERT(Head != NULL);
> >
> > -  if (Head->Signature != POOL_HEAD_SIGNATURE) {
> > +  if (Head->Signature != POOL_HEAD_SIGNATURE &&
> > +      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
> > +    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
> > +            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >       return EFI_INVALID_PARAMETER;
> >     }
> >
> > @@ -701,6 +709,7 @@ CoreFreePoolI (
> >                   IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
> >     HasPoolTail = !(IsGuarded &&
> >                     ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > +  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >
> >     if (HasPoolTail) {
> >       Tail = HEAD_TO_TAIL (Head);
> > @@ -757,7 +766,7 @@ CoreFreePoolI (
> >     //
> >     // If it's not on the list, it must be pool pages
> >     //
> > -  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
> > +  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
> >
> >       //
> >       // Return the memory pages back to free memory
> > diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > index fa8f8fe91a..6298b67db1 100644
> > --- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > +++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > @@ -1250,7 +1250,7 @@ ApplyMemoryProtectionPolicy (
> >     // Don't overwrite Guard pages, which should be the first and/or last page,
> >     // if any.
> >     //
> > -  if (IsHeapGuardEnabled ()) {
> > +  if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
> >       if (IsGuardPage (Memory))  {
> >         Memory += EFI_PAGE_SIZE;
> >         Length -= EFI_PAGE_SIZE;
> > diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > index 05eb4f422b..04cfb2dab2 100644
> > --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > @@ -1,7 +1,7 @@
> >   /** @file
> >     UEFI PropertiesTable support
> >
> > -Copyright (c) 2015 - 2017, Intel Corporation. All rights reserved.<BR>
> > +Copyright (c) 2015 - 2018, Intel Corporation. All rights reserved.<BR>
> >   This program and the accompanying materials
> >   are licensed and made available under the terms and conditions of the BSD License
> >   which accompanies this distribution.  The full text of the license may be found at
> > @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> >   #include <Guid/PropertiesTable.h>
> >
> >   #include "DxeMain.h"
> > +#include "HeapGuard.h"
> >
> >   #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
> >     ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
> > @@ -205,16 +206,13 @@ MergeMemoryMap (
> >       NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
> >
> >       do {
> > -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
> > +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
> > +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
> >         if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
> > -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> > -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> > -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> > -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> > -        if (NewMemoryMapEntry != MemoryMapEntry) {
> > -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> > -        }
> > -
> > +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> > +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> > +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> > +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> >           NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
> >           continue;
> >         } else {
> >


^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2018-10-26  1:20 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-25  7:17 [PATCH v4 0/6] Introduce freed-memory guard feature Jian J Wang
2018-10-25  7:18 ` [PATCH v4 1/6] MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation Jian J Wang
2018-10-25  7:18 ` [PATCH v4 2/6] MdeModulePkg: introduce UEFI freed-memory guard bit in HeapGuard PCD Jian J Wang
2018-10-25  7:18 ` [PATCH v4 3/6] UefiCpuPkg/CpuDxe: consider freed-memory guard in non-stop mode Jian J Wang
2018-10-26  0:38   ` Dong, Eric
2018-10-25  7:18 ` [PATCH v4 4/6] UefiCpuPkg/CpuDxe: prevent recursive calling of InitializePageTablePool Jian J Wang
2018-10-26  0:38   ` Dong, Eric
2018-10-25  7:18 ` [PATCH v4 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock Jian J Wang
2018-10-25  7:20   ` Wang, Jian J
2018-10-26  1:17     ` Zeng, Star
2018-10-26  1:19       ` Wang, Jian J
2018-10-25  7:18 ` [PATCH v4 6/6] MdeModulePkg/Core: add freed-memory guard feature Jian J Wang
2018-10-26  1:18   ` Zeng, Star
2018-10-26  1:20     ` Wang, Jian J
2018-10-26  1:14 ` [PATCH v4 0/6] Introduce " Zeng, Star

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox