public inbox for devel@edk2.groups.io
* [PATCH] MdeModulePkg/Core: allow HeapGuard even before CpuArchProtocol installed
@ 2018-03-14  8:31 Jian J Wang
  2018-03-15  1:09 ` Wang, Jian J
  0 siblings, 1 reply; 5+ messages in thread
From: Jian J Wang @ 2018-03-14  8:31 UTC (permalink / raw)
  To: edk2-devel; +Cc: Star Zeng, Eric Dong, Jiewen Yao

Due to the fact that HeapGuard needs the CpuArchProtocol to update page
attributes, the feature is normally enabled only after the CpuArchProtocol
is installed. Since some drivers are loaded before the CpuArchProtocol,
they cannot make use of the HeapGuard feature to detect potential issues.

This patch fixes the above situation by updating the DXE core to skip the
NULL check against the global gCpu in IsMemoryTypeToGuard(), and by adding
a NULL check against gCpu in SetGuardPage() and UnsetGuardPage() so that
they can be called safely but do nothing before gCpu is available. This
allows HeapGuard to record all guarded memory without setting the related
Guard pages to not-present.

Once the CpuArchProtocol is installed, a protocol notify will be called
to complete the work of setting Guard pages to not-present.

Please note that the above changes would cause a #PF in GCD code during
cleanup of map entries, which is initiated by the CpuDxe driver to sync
real MTRR and paging attributes back to GCD. During that time, CpuDxe
doesn't allow GCD to update memory attributes, so no Guard page can be
unset. As a result, Guarded memory could not be freed during memory map
cleanup.

The solution is to avoid allocating guarded memory for memory map entries
in GCD code. This is done by setting the global mOnGuarding to TRUE before
memory allocation and setting it back to FALSE afterwards in the GCD
function CoreAllocateGcdMapEntry().

Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/Core/Dxe/Gcd/Gcd.c               |  10 ++
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 132 +++++++++++++++++++++++++-
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |   8 ++
 MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   5 +
 4 files changed, 154 insertions(+), 1 deletion(-)

diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
index 8fbc3d282c..77f4adb4bc 100644
--- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
+++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
@@ -16,6 +16,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 
 #include "DxeMain.h"
 #include "Gcd.h"
+#include "Mem/HeapGuard.h"
 
 #define MINIMUM_INITIAL_MEMORY_SIZE 0x10000
 
@@ -391,12 +392,21 @@ CoreAllocateGcdMapEntry (
   IN OUT EFI_GCD_MAP_ENTRY  **BottomEntry
   )
 {
+  //
+  // Set mOnGuarding to TRUE before memory allocation. This will make sure
+  // that the entry memory is not "guarded" by HeapGuard. Otherwise it might
+  // cause problems when it's freed (if HeapGuard is enabled).
+  //
+  mOnGuarding = TRUE;
   *TopEntry = AllocateZeroPool (sizeof (EFI_GCD_MAP_ENTRY));
+  mOnGuarding = FALSE;
   if (*TopEntry == NULL) {
     return EFI_OUT_OF_RESOURCES;
   }
 
+  mOnGuarding = TRUE;
   *BottomEntry = AllocateZeroPool (sizeof (EFI_GCD_MAP_ENTRY));
+  mOnGuarding = FALSE;
   if (*BottomEntry == NULL) {
     CoreFreePool (*TopEntry);
     return EFI_OUT_OF_RESOURCES;
diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
index 19245049c2..de2c468b83 100644
--- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
@@ -576,6 +576,10 @@ SetGuardPage (
   IN  EFI_PHYSICAL_ADDRESS      BaseAddress
   )
 {
+  if (gCpu == NULL) {
+    return;
+  }
+
   //
   // Set flag to make sure allocating memory without GUARD for page table
   // operation; otherwise infinite loops could be caused.
@@ -606,6 +610,10 @@ UnsetGuardPage (
 {
   UINT64          Attributes;
 
+  if (gCpu == NULL) {
+    return;
+  }
+
   //
   // Once the Guard page is unset, it will be freed back to memory pool. NX
   // memory protection must be restored for this page if NX is enabled for free
@@ -652,7 +660,7 @@ IsMemoryTypeToGuard (
   UINT64 ConfigBit;
   BOOLEAN     InSmm;
 
-  if (gCpu == NULL || AllocateType == AllocateAddress) {
+  if (AllocateType == AllocateAddress) {
     return FALSE;
   }
 
@@ -1160,6 +1168,128 @@ CoreConvertPagesWithGuard (
   return CoreConvertPages (Start, NumberOfPages, NewType);
 }
 
+/**
+  Set all Guard pages which cannot be set before CPU Arch Protocol is installed.
+**/
+VOID
+SetAllGuardPages (
+  VOID
+  )
+{
+  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    TableEntry;
+  UINT64    Address;
+  UINT64    GuardPage;
+  INTN      Level;
+  UINTN     Index;
+  BOOLEAN   OnGuarding;
+
+  if (mGuardedMemoryMap == 0 ||
+      mMapLevel == 0 ||
+      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
+    return;
+  }
+
+  CopyMem (Entries, mLevelMask, sizeof (Entries));
+  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
+
+  SetMem (Tables, sizeof(Tables), 0);
+  SetMem (Addresses, sizeof(Addresses), 0);
+  SetMem (Indices, sizeof(Indices), 0);
+
+  Level         = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+  Tables[Level] = mGuardedMemoryMap;
+  Address       = 0;
+  OnGuarding    = FALSE;
+
+  DEBUG_CODE (
+    DumpGuardedMemoryBitmap ();
+  );
+
+  while (TRUE) {
+    if (Indices[Level] > Entries[Level]) {
+      Tables[Level] = 0;
+      Level        -= 1;
+    } else {
+
+      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
+      Address     = Addresses[Level];
+
+      if (TableEntry == 0) {
+
+        OnGuarding = FALSE;
+
+      } else if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
+
+        Level            += 1;
+        Tables[Level]     = TableEntry;
+        Addresses[Level]  = Address;
+        Indices[Level]    = 0;
+
+        continue;
+
+      } else {
+
+        Index = 0;
+        while (Index < GUARDED_HEAP_MAP_ENTRY_BITS) {
+          if ((TableEntry & 1) == 1) {
+            if (OnGuarding) {
+              GuardPage = 0;
+            } else {
+              GuardPage = Address - EFI_PAGE_SIZE;
+            }
+            OnGuarding = TRUE;
+          } else {
+            if (OnGuarding) {
+              GuardPage = Address;
+            } else {
+              GuardPage = 0;
+            }
+            OnGuarding = FALSE;
+          }
+
+          if (GuardPage != 0) {
+            SetGuardPage (GuardPage);
+          }
+
+          if (TableEntry == 0) {
+            break;
+          }
+
+          TableEntry = RShiftU64 (TableEntry, 1);
+          Address   += EFI_PAGE_SIZE;
+          Index     += 1;
+        }
+      }
+    }
+
+    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
+      break;
+    }
+
+    Indices[Level] += 1;
+    Address = (Level == 0) ? 0 : Addresses[Level - 1];
+    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
+
+  }
+}
+
+/**
+  Notify function used to set all Guard pages after CPU Arch Protocol is installed.
+**/
+VOID
+HeapGuardCpuArchProtocolNotify (
+  VOID
+  )
+{
+  ASSERT (gCpu != NULL);
+  SetAllGuardPages ();
+}
+
 /**
   Helper function to convert a UINT64 value in binary to a string.
 
diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
index 7208ab1437..8c34692439 100644
--- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
@@ -399,6 +399,14 @@ IsHeapGuardEnabled (
   VOID
   );
 
+/**
+  Notify function used to set all Guard pages after CPU Arch Protocol installed.
+**/
+VOID
+HeapGuardCpuArchProtocolNotify (
+  VOID
+  );
+
 extern BOOLEAN mOnGuarding;
 
 #endif
diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
index 407aece807..2f7e490af1 100644
--- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
+++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
@@ -1001,6 +1001,11 @@ MemoryProtectionCpuArchProtocolNotify (
     InitializeDxeNxMemoryProtectionPolicy ();
   }
 
+  //
+  // Call notify function meant for Heap Guard.
+  //
+  HeapGuardCpuArchProtocolNotify ();
+
   if (mImageProtectionPolicy == 0) {
     return;
   }
-- 
2.16.2.windows.1




* Re: [PATCH] MdeModulePkg/Core: allow HeapGuard even before CpuArchProtocol installed
  2018-03-14  8:31 Jian J Wang
@ 2018-03-15  1:09 ` Wang, Jian J
  0 siblings, 0 replies; 5+ messages in thread
From: Wang, Jian J @ 2018-03-15  1:09 UTC (permalink / raw)
  To: Wang, Jian J, edk2-devel@lists.01.org; +Cc: Yao, Jiewen, Dong, Eric, Zeng, Star

There's a bit operation error found on 32-bit platforms. I'll send a v2 patch later.

Regards,
Jian


> -----Original Message-----
> From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org] On Behalf Of Jian J
> Wang
> Sent: Wednesday, March 14, 2018 4:31 PM
> To: edk2-devel@lists.01.org
> Cc: Yao, Jiewen <jiewen.yao@intel.com>; Dong, Eric <eric.dong@intel.com>;
> Zeng, Star <star.zeng@intel.com>
> Subject: [edk2] [PATCH] MdeModulePkg/Core: allow HeapGuard even before
> CpuArchProtocol installed



* [PATCH] MdeModulePkg/Core: allow HeapGuard even before CpuArchProtocol installed
@ 2018-03-15  2:27 Jian J Wang
  2018-03-15  3:46 ` Ni, Ruiyu
  0 siblings, 1 reply; 5+ messages in thread
From: Jian J Wang @ 2018-03-15  2:27 UTC (permalink / raw)
  To: edk2-devel; +Cc: Star Zeng, Eric Dong, Jiewen Yao

> v2 changes:
>    Fix a logic hole in the bit operations (SetBits(), ClearBits() and
>    GetBits()) for an address on a 64K boundary with a length of just
>    64 bits.

Due to the fact that HeapGuard needs the CpuArchProtocol to update page
attributes, the feature is normally enabled only after the CpuArchProtocol
is installed. Since some drivers are loaded before the CpuArchProtocol,
they cannot make use of the HeapGuard feature to detect potential issues.

This patch fixes the above situation by updating the DXE core to skip the
NULL check against the global gCpu in IsMemoryTypeToGuard(), and by adding
a NULL check against gCpu in SetGuardPage() and UnsetGuardPage() so that
they can be called safely but do nothing before gCpu is available. This
allows HeapGuard to record all guarded memory without setting the related
Guard pages to not-present.

Once the CpuArchProtocol is installed, a protocol notify will be called
to complete the work of setting Guard pages to not-present.

Please note that the above changes would cause a #PF in GCD code during
cleanup of map entries, which is initiated by the CpuDxe driver to sync
real MTRR and paging attributes back to GCD. During that time, CpuDxe
doesn't allow GCD to update memory attributes, so no Guard page can be
unset. As a result, Guarded memory could not be freed during memory map
cleanup.

The solution is to avoid allocating guarded memory for memory map entries
in GCD code. This is done by setting the global mOnGuarding to TRUE before
memory allocation and setting it back to FALSE afterwards in the GCD
function CoreAllocateGcdMapEntry().

Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
 MdeModulePkg/Core/Dxe/Gcd/Gcd.c               |  10 ++
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 148 ++++++++++++++++++++++++--
 MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |   8 ++
 MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   5 +
 MdeModulePkg/Core/PiSmmCore/HeapGuard.c       |  16 +--
 5 files changed, 174 insertions(+), 13 deletions(-)

diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
index 8fbc3d282c..77f4adb4bc 100644
--- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
+++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
@@ -16,6 +16,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 
 #include "DxeMain.h"
 #include "Gcd.h"
+#include "Mem/HeapGuard.h"
 
 #define MINIMUM_INITIAL_MEMORY_SIZE 0x10000
 
@@ -391,12 +392,21 @@ CoreAllocateGcdMapEntry (
   IN OUT EFI_GCD_MAP_ENTRY  **BottomEntry
   )
 {
+  //
+  // Set mOnGuarding to TRUE before memory allocation. This will make sure
+  // that the entry memory is not "guarded" by HeapGuard. Otherwise it might
+  // cause problems when it's freed (if HeapGuard is enabled).
+  //
+  mOnGuarding = TRUE;
   *TopEntry = AllocateZeroPool (sizeof (EFI_GCD_MAP_ENTRY));
+  mOnGuarding = FALSE;
   if (*TopEntry == NULL) {
     return EFI_OUT_OF_RESOURCES;
   }
 
+  mOnGuarding = TRUE;
   *BottomEntry = AllocateZeroPool (sizeof (EFI_GCD_MAP_ENTRY));
+  mOnGuarding = FALSE;
   if (*BottomEntry == NULL) {
     CoreFreePool (*TopEntry);
     return EFI_OUT_OF_RESOURCES;
diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
index ac043b5d9b..fd6aeee8da 100644
--- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
@@ -70,7 +70,7 @@ SetBits (
   StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
   EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
 
-  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
+  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
     Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
               GUARDED_HEAP_MAP_ENTRY_BITS;
     Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
@@ -123,7 +123,7 @@ ClearBits (
   StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
   EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
 
-  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
+  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
     Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
               GUARDED_HEAP_MAP_ENTRY_BITS;
     Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
@@ -188,10 +188,14 @@ GetBits (
     Lsbs = 0;
   }
 
-  Result    = RShiftU64 ((*BitMap), StartBit) & (LShiftU64 (1, Msbs) - 1);
-  if (Lsbs > 0) {
-    BitMap  += 1;
-    Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
+  if (StartBit == 0 && BitNumber == GUARDED_HEAP_MAP_ENTRY_BITS) {
+    Result = *BitMap;
+  } else {
+    Result    = RShiftU64((*BitMap), StartBit) & (LShiftU64(1, Msbs) - 1);
+    if (Lsbs > 0) {
+      BitMap  += 1;
+      Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
+    }
   }
 
   return Result;
@@ -576,6 +580,10 @@ SetGuardPage (
   IN  EFI_PHYSICAL_ADDRESS      BaseAddress
   )
 {
+  if (gCpu == NULL) {
+    return;
+  }
+
   //
   // Set flag to make sure allocating memory without GUARD for page table
   // operation; otherwise infinite loops could be caused.
@@ -606,6 +614,10 @@ UnsetGuardPage (
 {
   UINT64          Attributes;
 
+  if (gCpu == NULL) {
+    return;
+  }
+
   //
   // Once the Guard page is unset, it will be freed back to memory pool. NX
   // memory protection must be restored for this page if NX is enabled for free
@@ -652,7 +664,7 @@ IsMemoryTypeToGuard (
   UINT64 ConfigBit;
   BOOLEAN     InSmm;
 
-  if (gCpu == NULL || AllocateType == AllocateAddress) {
+  if (AllocateType == AllocateAddress) {
     return FALSE;
   }
 
@@ -1164,6 +1176,128 @@ CoreConvertPagesWithGuard (
   return CoreConvertPages (Start, NumberOfPages, NewType);
 }
 
+/**
+  Set all Guard pages which cannot be set before CPU Arch Protocol is installed.
+**/
+VOID
+SetAllGuardPages (
+  VOID
+  )
+{
+  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
+  UINT64    TableEntry;
+  UINT64    Address;
+  UINT64    GuardPage;
+  INTN      Level;
+  UINTN     Index;
+  BOOLEAN   OnGuarding;
+
+  if (mGuardedMemoryMap == 0 ||
+      mMapLevel == 0 ||
+      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
+    return;
+  }
+
+  CopyMem (Entries, mLevelMask, sizeof (Entries));
+  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
+
+  SetMem (Tables, sizeof(Tables), 0);
+  SetMem (Addresses, sizeof(Addresses), 0);
+  SetMem (Indices, sizeof(Indices), 0);
+
+  Level         = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+  Tables[Level] = mGuardedMemoryMap;
+  Address       = 0;
+  OnGuarding    = FALSE;
+
+  DEBUG_CODE (
+    DumpGuardedMemoryBitmap ();
+  );
+
+  while (TRUE) {
+    if (Indices[Level] > Entries[Level]) {
+      Tables[Level] = 0;
+      Level        -= 1;
+    } else {
+
+      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
+      Address     = Addresses[Level];
+
+      if (TableEntry == 0) {
+
+        OnGuarding = FALSE;
+
+      } else if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
+
+        Level            += 1;
+        Tables[Level]     = TableEntry;
+        Addresses[Level]  = Address;
+        Indices[Level]    = 0;
+
+        continue;
+
+      } else {
+
+        Index = 0;
+        while (Index < GUARDED_HEAP_MAP_ENTRY_BITS) {
+          if ((TableEntry & 1) == 1) {
+            if (OnGuarding) {
+              GuardPage = 0;
+            } else {
+              GuardPage = Address - EFI_PAGE_SIZE;
+            }
+            OnGuarding = TRUE;
+          } else {
+            if (OnGuarding) {
+              GuardPage = Address;
+            } else {
+              GuardPage = 0;
+            }
+            OnGuarding = FALSE;
+          }
+
+          if (GuardPage != 0) {
+            SetGuardPage (GuardPage);
+          }
+
+          if (TableEntry == 0) {
+            break;
+          }
+
+          TableEntry = RShiftU64 (TableEntry, 1);
+          Address   += EFI_PAGE_SIZE;
+          Index     += 1;
+        }
+      }
+    }
+
+    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
+      break;
+    }
+
+    Indices[Level] += 1;
+    Address = (Level == 0) ? 0 : Addresses[Level - 1];
+    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
+
+  }
+}
+
+/**
+  Notify function used to set all Guard pages after CPU Arch Protocol is installed.
+**/
+VOID
+HeapGuardCpuArchProtocolNotify (
+  VOID
+  )
+{
+  ASSERT (gCpu != NULL);
+  SetAllGuardPages ();
+}
+
 /**
   Helper function to convert a UINT64 value in binary to a string.
 
diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
index 7208ab1437..8c34692439 100644
--- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
@@ -399,6 +399,14 @@ IsHeapGuardEnabled (
   VOID
   );
 
+/**
+  Notify function used to set all Guard pages after CPU Arch Protocol installed.
+**/
+VOID
+HeapGuardCpuArchProtocolNotify (
+  VOID
+  );
+
 extern BOOLEAN mOnGuarding;
 
 #endif
diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
index 407aece807..2f7e490af1 100644
--- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
+++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
@@ -1001,6 +1001,11 @@ MemoryProtectionCpuArchProtocolNotify (
     InitializeDxeNxMemoryProtectionPolicy ();
   }
 
+  //
+  // Call notify function meant for Heap Guard.
+  //
+  HeapGuardCpuArchProtocolNotify ();
+
   if (mImageProtectionPolicy == 0) {
     return;
   }
diff --git a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
index 923af93de2..f9657f9baa 100644
--- a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
+++ b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
@@ -73,7 +73,7 @@ SetBits (
   StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
   EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
 
-  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
+  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
     Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
               GUARDED_HEAP_MAP_ENTRY_BITS;
     Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
@@ -126,7 +126,7 @@ ClearBits (
   StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
   EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
 
-  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
+  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
     Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
               GUARDED_HEAP_MAP_ENTRY_BITS;
     Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
@@ -191,10 +191,14 @@ GetBits (
     Lsbs = 0;
   }
 
-  Result    = RShiftU64 ((*BitMap), StartBit) & (LShiftU64 (1, Msbs) - 1);
-  if (Lsbs > 0) {
-    BitMap  += 1;
-    Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
+  if (StartBit == 0 && BitNumber == GUARDED_HEAP_MAP_ENTRY_BITS) {
+    Result = *BitMap;
+  } else {
+    Result    = RShiftU64((*BitMap), StartBit) & (LShiftU64(1, Msbs) - 1);
+    if (Lsbs > 0) {
+      BitMap  += 1;
+      Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
+    }
   }
 
   return Result;
-- 
2.16.2.windows.1




* Re: [PATCH] MdeModulePkg/Core: allow HeapGuard even before CpuArchProtocol installed
  2018-03-15  2:27 [PATCH] MdeModulePkg/Core: allow HeapGuard even before CpuArchProtocol installed Jian J Wang
@ 2018-03-15  3:46 ` Ni, Ruiyu
  2018-03-15  5:04   ` Wang, Jian J
  0 siblings, 1 reply; 5+ messages in thread
From: Ni, Ruiyu @ 2018-03-15  3:46 UTC (permalink / raw)
  To: Jian J Wang, edk2-devel; +Cc: Jiewen Yao, Eric Dong, Star Zeng

On 3/15/2018 10:27 AM, Jian J Wang wrote:
>> v2 changes:
>>     Fix a logic hole in the bit operations (SetBits(), ClearBits() and
>>     GetBits()) for an address on a 64K boundary with a length of just
>>     64 bits.
> 
> Due to the fact that HeapGuard needs CpuArchProtocol to update page
> attributes, the feature is normally enabled after CpuArchProtocol is
> installed. Since there're some drivers are loaded before CpuArchProtocl,
> they cannot make use HeapGuard feature to detect potential issues.
> 
> This patch fixes above situation by updating the DXE core to skip the
> NULL check against global gCpu in the IsMemoryTypeToGuard(), and adding
> NULL check against gCpu in SetGuardPage() and UnsetGuardPage() to make
> sure that they can be called but do nothing. This will allow HeapGuard to
> record all guarded memory without setting the related Guard pages to not-
> present.
> 
> Once the CpuArchProtocol is installed, a protocol notify will be called
> to complete the work of setting Guard pages to not-present.
> 
> Please note that the above changes would cause a #PF in GCD code during
> cleanup of map entries, which is initiated by the CpuDxe driver to update
> the real MTRR and paging attributes back to GCD. During that time, CpuDxe
> doesn't allow GCD to update memory attributes, so no Guard page can be
> unset. As a result, Guarded memory could not be freed during memory map
> cleanup.
> 
> The solution is to avoid allocating guarded memory for memory map entries
> in GCD code. This is done by setting the global mOnGuarding to TRUE before
> memory allocation and setting it back to FALSE afterwards in the GCD
> function CoreAllocateGcdMapEntry().
> 
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>   MdeModulePkg/Core/Dxe/Gcd/Gcd.c               |  10 ++
>   MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 148 ++++++++++++++++++++++++--
>   MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |   8 ++
>   MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   5 +
>   MdeModulePkg/Core/PiSmmCore/HeapGuard.c       |  16 +--
>   5 files changed, 174 insertions(+), 13 deletions(-)
> 
> diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> index 8fbc3d282c..77f4adb4bc 100644
> --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> @@ -16,6 +16,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>   
>   #include "DxeMain.h"
>   #include "Gcd.h"
> +#include "Mem/HeapGuard.h"
>   
>   #define MINIMUM_INITIAL_MEMORY_SIZE 0x10000
>   
> @@ -391,12 +392,21 @@ CoreAllocateGcdMapEntry (
>     IN OUT EFI_GCD_MAP_ENTRY  **BottomEntry
>     )
>   {
> +  //
> +  // Set mOnGuarding to TRUE before memory allocation. This makes sure
> +  // that the entry memory is not "guarded" by HeapGuard. Otherwise it might
> +  // cause problems when it's freed (if HeapGuard is enabled).
> +  //
> +  mOnGuarding = TRUE;
>     *TopEntry = AllocateZeroPool (sizeof (EFI_GCD_MAP_ENTRY));
> +  mOnGuarding = FALSE;
>     if (*TopEntry == NULL) {
>       return EFI_OUT_OF_RESOURCES;
>     }
>   
> +  mOnGuarding = TRUE;
>     *BottomEntry = AllocateZeroPool (sizeof (EFI_GCD_MAP_ENTRY));
> +  mOnGuarding = FALSE;
>     if (*BottomEntry == NULL) {
>       CoreFreePool (*TopEntry);
>       return EFI_OUT_OF_RESOURCES;
> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> index ac043b5d9b..fd6aeee8da 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> @@ -70,7 +70,7 @@ SetBits (
>     StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
>     EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
>   
> -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
>       Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
>                 GUARDED_HEAP_MAP_ENTRY_BITS;
>       Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> @@ -123,7 +123,7 @@ ClearBits (
>     StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
>     EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
>   
> -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
>       Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
>                 GUARDED_HEAP_MAP_ENTRY_BITS;
>       Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> @@ -188,10 +188,14 @@ GetBits (
>       Lsbs = 0;
>     }
>   
> -  Result    = RShiftU64 ((*BitMap), StartBit) & (LShiftU64 (1, Msbs) - 1);
> -  if (Lsbs > 0) {
> -    BitMap  += 1;
> -    Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> +  if (StartBit == 0 && BitNumber == GUARDED_HEAP_MAP_ENTRY_BITS) {
> +    Result = *BitMap;
> +  } else {
> +    Result    = RShiftU64((*BitMap), StartBit) & (LShiftU64(1, Msbs) - 1);
> +    if (Lsbs > 0) {
> +      BitMap  += 1;
> +      Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> +    }
>     }
>   
>     return Result;
> @@ -576,6 +580,10 @@ SetGuardPage (
>     IN  EFI_PHYSICAL_ADDRESS      BaseAddress
>     )
>   {
> +  if (gCpu == NULL) {
> +    return;
> +  }
> +
>     //
>     // Set flag to make sure allocating memory without GUARD for page table
>     // operation; otherwise infinite loops could be caused.
> @@ -606,6 +614,10 @@ UnsetGuardPage (
>   {
>     UINT64          Attributes;
>   
> +  if (gCpu == NULL) {
> +    return;
> +  }
> +
>     //
>     // Once the Guard page is unset, it will be freed back to memory pool. NX
>     // memory protection must be restored for this page if NX is enabled for free
> @@ -652,7 +664,7 @@ IsMemoryTypeToGuard (
>     UINT64 ConfigBit;
>     BOOLEAN     InSmm;
>   
> -  if (gCpu == NULL || AllocateType == AllocateAddress) {
> +  if (AllocateType == AllocateAddress) {
>       return FALSE;
>     }
>   
> @@ -1164,6 +1176,128 @@ CoreConvertPagesWithGuard (
>     return CoreConvertPages (Start, NumberOfPages, NewType);
>   }
>   
> +/**
> +  Set all Guard pages which cannot be set before CPU Arch Protocol installed.
> +**/
> +VOID
> +SetAllGuardPages (
> +  VOID
> +  )
> +{
> +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    TableEntry;
> +  UINT64    Address;
> +  UINT64    GuardPage;
> +  INTN      Level;
> +  UINTN     Index;
> +  BOOLEAN   OnGuarding;
> +
> +  if (mGuardedMemoryMap == 0 ||
> +      mMapLevel == 0 ||
> +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> +    return;
> +  }
> +
> +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> +
> +  SetMem (Tables, sizeof(Tables), 0);
> +  SetMem (Addresses, sizeof(Addresses), 0);
> +  SetMem (Indices, sizeof(Indices), 0);
> +
> +  Level         = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> +  Tables[Level] = mGuardedMemoryMap;
> +  Address       = 0;
> +  OnGuarding    = FALSE;
> +
> +  DEBUG_CODE (
> +    DumpGuardedMemoryBitmap ();
> +  );
> +
> +  while (TRUE) {
> +    if (Indices[Level] > Entries[Level]) {
> +      Tables[Level] = 0;
> +      Level        -= 1;
> +    } else {
> +
> +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> +      Address     = Addresses[Level];
> +
> +      if (TableEntry == 0) {
> +
> +        OnGuarding = FALSE;
> +
> +      } else if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> +
> +        Level            += 1;
> +        Tables[Level]     = TableEntry;
> +        Addresses[Level]  = Address;
> +        Indices[Level]    = 0;
> +
> +        continue;
> +
> +      } else {
> +
> +        Index = 0;
> +        while (Index < GUARDED_HEAP_MAP_ENTRY_BITS) {
> +          if ((TableEntry & 1) == 1) {
> +            if (OnGuarding) {
> +              GuardPage = 0;
> +            } else {
> +              GuardPage = Address - EFI_PAGE_SIZE;
> +            }
> +            OnGuarding = TRUE;
> +          } else {
> +            if (OnGuarding) {
> +              GuardPage = Address;
> +            } else {
> +              GuardPage = 0;
> +            }
> +            OnGuarding = FALSE;
> +          }
> +
> +          if (GuardPage != 0) {
> +            SetGuardPage (GuardPage);
> +          }
> +
> +          if (TableEntry == 0) {
> +            break;
> +          }
> +
> +          TableEntry = RShiftU64 (TableEntry, 1);
> +          Address   += EFI_PAGE_SIZE;
> +          Index     += 1;
> +        }
> +      }
> +    }
> +
> +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> +      break;
> +    }
> +
> +    Indices[Level] += 1;
> +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> +    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
> +
> +  }
> +}
> +
> +/**
> +  Notify function used to set all Guard pages before CPU Arch Protocol installed.
> +**/
> +VOID
> +HeapGuardCpuArchProtocolNotify (
> +  VOID
> +  )
> +{
> +  ASSERT (gCpu != NULL);
> +  SetAllGuardPages ();
> +}
> +
>   /**
>     Helper function to convert a UINT64 value in binary to a string.
>   
> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> index 7208ab1437..8c34692439 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> @@ -399,6 +399,14 @@ IsHeapGuardEnabled (
>     VOID
>     );
>   
> +/**
> +  Notify function used to set all Guard pages after CPU Arch Protocol installed.
> +**/
> +VOID
> +HeapGuardCpuArchProtocolNotify (
> +  VOID
> +  );
> +
>   extern BOOLEAN mOnGuarding;
>   
>   #endif
> diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> index 407aece807..2f7e490af1 100644
> --- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> +++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> @@ -1001,6 +1001,11 @@ MemoryProtectionCpuArchProtocolNotify (
>       InitializeDxeNxMemoryProtectionPolicy ();
>     }
>   
> +  //
> +  // Call notify function meant for Heap Guard.
> +  //
> +  HeapGuardCpuArchProtocolNotify ();
> +
>     if (mImageProtectionPolicy == 0) {
>       return;
>     }
> diff --git a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> index 923af93de2..f9657f9baa 100644
> --- a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> +++ b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> @@ -73,7 +73,7 @@ SetBits (
>     StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
>     EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
>   
> -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
>       Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
>                 GUARDED_HEAP_MAP_ENTRY_BITS;
>       Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> @@ -126,7 +126,7 @@ ClearBits (
>     StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
>     EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
>   
> -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
>       Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
>                 GUARDED_HEAP_MAP_ENTRY_BITS;
>       Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> @@ -191,10 +191,14 @@ GetBits (
>       Lsbs = 0;
>     }
>   
> -  Result    = RShiftU64 ((*BitMap), StartBit) & (LShiftU64 (1, Msbs) - 1);
> -  if (Lsbs > 0) {
> -    BitMap  += 1;
> -    Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> +  if (StartBit == 0 && BitNumber == GUARDED_HEAP_MAP_ENTRY_BITS) {
> +    Result = *BitMap;
> +  } else {
> +    Result    = RShiftU64((*BitMap), StartBit) & (LShiftU64(1, Msbs) - 1);
> +    if (Lsbs > 0) {
> +      BitMap  += 1;
> +      Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> +    }
>     }
>   
>     return Result;
> 
Jian,
I think the single patch could be split into at least three patches.
Two of them fix the ">=" issue in DxeCore and PiSmmCore.
One of them fixes the HeapGuard gap.

-- 
Thanks,
Ray
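
[Editorial illustration] The page-walk in the quoted SetAllGuardPages() marks the boundaries of each run of guarded pages: a 0-to-1 bit transition means the page just before the run is a Guard page, and a 1-to-0 transition means the page right after the run is one. The sketch below is a simplified, hypothetical model of the innermost loop only — one 64-bit bitmap word, a flat address space, Base assumed nonzero so the head Guard page exists; CollectGuardPages() is not an EDK2 function.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 0x1000ULL  /* stand-in for EFI_PAGE_SIZE */

/* Walk one 64-bit bitmap word in which a set bit marks a guarded page,
   and collect the Guard pages that bracket each run of set bits. */
static size_t
CollectGuardPages (uint64_t Bitmap, uint64_t Base, uint64_t *Out, size_t Max)
{
  uint64_t  Address    = Base;
  uint64_t  GuardPage;
  int       OnGuarding = 0;
  size_t    Count      = 0;
  int       Index;

  for (Index = 0; Index < 64; Index++) {
    if ((Bitmap & 1) == 1) {
      /* 0 -> 1 transition: the page just before the run is a Guard page. */
      GuardPage  = OnGuarding ? 0 : Address - PAGE_SIZE;
      OnGuarding = 1;
    } else {
      /* 1 -> 0 transition: the page right after the run is a Guard page. */
      GuardPage  = OnGuarding ? Address : 0;
      OnGuarding = 0;
    }

    if (GuardPage != 0 && Count < Max) {
      Out[Count++] = GuardPage;
    }

    if (Bitmap == 0) {
      break;  /* no more set bits left to bracket */
    }

    Bitmap  >>= 1;
    Address  += PAGE_SIZE;
  }
  return Count;
}
```

With Bitmap = 0x6 (pages 1 and 2 guarded) and Base = 0x100000, the collected Guard pages are 0x100000 and 0x103000 — the pages immediately surrounding the guarded run. A run that reaches bit 63 is handled in the real code by carrying OnGuarding into the next bitmap word; the sketch simply stops at the word boundary.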



* Re: [PATCH] MdeModulePkg/Core: allow HeapGuard even before CpuArchProtocol installed
  2018-03-15  3:46 ` Ni, Ruiyu
@ 2018-03-15  5:04   ` Wang, Jian J
  0 siblings, 0 replies; 5+ messages in thread
From: Wang, Jian J @ 2018-03-15  5:04 UTC (permalink / raw)
  To: Ni, Ruiyu, edk2-devel@lists.01.org; +Cc: Yao, Jiewen, Dong, Eric, Zeng, Star

Thanks for catching this. A new patch has been sent out.

Regards,
Jian


> -----Original Message-----
> From: Ni, Ruiyu
> Sent: Thursday, March 15, 2018 11:47 AM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Yao, Jiewen <jiewen.yao@intel.com>; Dong, Eric <eric.dong@intel.com>;
> Zeng, Star <star.zeng@intel.com>
> Subject: Re: [edk2] [PATCH] MdeModulePkg/Core: allow HeapGuard even
> before CpuArchProtocol installed
> 
> On 3/15/2018 10:27 AM, Jian J Wang wrote:
> >> v2 changes:
> >>     Fix a logic hole in the bit operations (SetBits(), ClearBits(),
> >>     GetBits()) when the address is on a 64K boundary and the length is
> >>     exactly 64 bits.
> >
> > Because HeapGuard needs CpuArchProtocol to update page attributes, the
> > feature is normally enabled only after CpuArchProtocol is installed.
> > Since some drivers are loaded before CpuArchProtocol, they cannot make
> > use of the HeapGuard feature to detect potential issues.
> >
> > This patch fixes the above situation by updating the DXE core to skip the
> > NULL check against the global gCpu in IsMemoryTypeToGuard(), and adding a
> > NULL check against gCpu in SetGuardPage() and UnsetGuardPage() to make
> > sure that they can be called but do nothing. This allows HeapGuard to
> > record all guarded memory without setting the related Guard pages to not-
> > present.
> >
> > Once the CpuArchProtocol is installed, a protocol notify will be called
> > to complete the work of setting Guard pages to not-present.
> >
> > Please note that the above changes would cause a #PF in GCD code during
> > cleanup of map entries, which is initiated by the CpuDxe driver to update
> > the real MTRR and paging attributes back to GCD. During that time, CpuDxe
> > doesn't allow GCD to update memory attributes, so no Guard page can be
> > unset. As a result, Guarded memory could not be freed during memory map
> > cleanup.
> >
> > The solution is to avoid allocating guarded memory for memory map entries
> > in GCD code. This is done by setting the global mOnGuarding to TRUE before
> > memory allocation and setting it back to FALSE afterwards in the GCD
> > function CoreAllocateGcdMapEntry().
> >
> > Cc: Star Zeng <star.zeng@intel.com>
> > Cc: Eric Dong <eric.dong@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> > ---
> >   MdeModulePkg/Core/Dxe/Gcd/Gcd.c               |  10 ++
> >   MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 148
> ++++++++++++++++++++++++--
> >   MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |   8 ++
> >   MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   5 +
> >   MdeModulePkg/Core/PiSmmCore/HeapGuard.c       |  16 +--
> >   5 files changed, 174 insertions(+), 13 deletions(-)
> >
> > diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > index 8fbc3d282c..77f4adb4bc 100644
> > --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > @@ -16,6 +16,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY
> KIND, EITHER EXPRESS OR IMPLIED.
> >
> >   #include "DxeMain.h"
> >   #include "Gcd.h"
> > +#include "Mem/HeapGuard.h"
> >
> >   #define MINIMUM_INITIAL_MEMORY_SIZE 0x10000
> >
> > @@ -391,12 +392,21 @@ CoreAllocateGcdMapEntry (
> >     IN OUT EFI_GCD_MAP_ENTRY  **BottomEntry
> >     )
> >   {
> > +  //
> > +  // Set mOnGuarding to TRUE before memory allocation. This makes sure
> > +  // that the entry memory is not "guarded" by HeapGuard. Otherwise it might
> > +  // cause problems when it's freed (if HeapGuard is enabled).
> > +  //
> > +  mOnGuarding = TRUE;
> >     *TopEntry = AllocateZeroPool (sizeof (EFI_GCD_MAP_ENTRY));
> > +  mOnGuarding = FALSE;
> >     if (*TopEntry == NULL) {
> >       return EFI_OUT_OF_RESOURCES;
> >     }
> >
> > +  mOnGuarding = TRUE;
> >     *BottomEntry = AllocateZeroPool (sizeof (EFI_GCD_MAP_ENTRY));
> > +  mOnGuarding = FALSE;
> >     if (*BottomEntry == NULL) {
> >       CoreFreePool (*TopEntry);
> >       return EFI_OUT_OF_RESOURCES;
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > index ac043b5d9b..fd6aeee8da 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > @@ -70,7 +70,7 @@ SetBits (
> >     StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
> >     EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> >
> > -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> > +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
> >       Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
> >                 GUARDED_HEAP_MAP_ENTRY_BITS;
> >       Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> > @@ -123,7 +123,7 @@ ClearBits (
> >     StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
> >     EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> >
> > -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> > +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
> >       Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
> >                 GUARDED_HEAP_MAP_ENTRY_BITS;
> >       Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> > @@ -188,10 +188,14 @@ GetBits (
> >       Lsbs = 0;
> >     }
> >
> > -  Result    = RShiftU64 ((*BitMap), StartBit) & (LShiftU64 (1, Msbs) - 1);
> > -  if (Lsbs > 0) {
> > -    BitMap  += 1;
> > -    Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> > +  if (StartBit == 0 && BitNumber == GUARDED_HEAP_MAP_ENTRY_BITS) {
> > +    Result = *BitMap;
> > +  } else {
> > +    Result    = RShiftU64((*BitMap), StartBit) & (LShiftU64(1, Msbs) - 1);
> > +    if (Lsbs > 0) {
> > +      BitMap  += 1;
> > +      Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> > +    }
> >     }
> >
> >     return Result;
> > @@ -576,6 +580,10 @@ SetGuardPage (
> >     IN  EFI_PHYSICAL_ADDRESS      BaseAddress
> >     )
> >   {
> > +  if (gCpu == NULL) {
> > +    return;
> > +  }
> > +
> >     //
> >     // Set flag to make sure allocating memory without GUARD for page table
> >     // operation; otherwise infinite loops could be caused.
> > @@ -606,6 +614,10 @@ UnsetGuardPage (
> >   {
> >     UINT64          Attributes;
> >
> > +  if (gCpu == NULL) {
> > +    return;
> > +  }
> > +
> >     //
> >     // Once the Guard page is unset, it will be freed back to memory pool. NX
> >     // memory protection must be restored for this page if NX is enabled for
> free
> > @@ -652,7 +664,7 @@ IsMemoryTypeToGuard (
> >     UINT64 ConfigBit;
> >     BOOLEAN     InSmm;
> >
> > -  if (gCpu == NULL || AllocateType == AllocateAddress) {
> > +  if (AllocateType == AllocateAddress) {
> >       return FALSE;
> >     }
> >
> > @@ -1164,6 +1176,128 @@ CoreConvertPagesWithGuard (
> >     return CoreConvertPages (Start, NumberOfPages, NewType);
> >   }
> >
> > +/**
> > +  Set all Guard pages which cannot be set before CPU Arch Protocol installed.
> > +**/
> > +VOID
> > +SetAllGuardPages (
> > +  VOID
> > +  )
> > +{
> > +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    TableEntry;
> > +  UINT64    Address;
> > +  UINT64    GuardPage;
> > +  INTN      Level;
> > +  UINTN     Index;
> > +  BOOLEAN   OnGuarding;
> > +
> > +  if (mGuardedMemoryMap == 0 ||
> > +      mMapLevel == 0 ||
> > +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > +    return;
> > +  }
> > +
> > +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> > +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> > +
> > +  SetMem (Tables, sizeof(Tables), 0);
> > +  SetMem (Addresses, sizeof(Addresses), 0);
> > +  SetMem (Indices, sizeof(Indices), 0);
> > +
> > +  Level         = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > +  Tables[Level] = mGuardedMemoryMap;
> > +  Address       = 0;
> > +  OnGuarding    = FALSE;
> > +
> > +  DEBUG_CODE (
> > +    DumpGuardedMemoryBitmap ();
> > +  );
> > +
> > +  while (TRUE) {
> > +    if (Indices[Level] > Entries[Level]) {
> > +      Tables[Level] = 0;
> > +      Level        -= 1;
> > +    } else {
> > +
> > +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> > +      Address     = Addresses[Level];
> > +
> > +      if (TableEntry == 0) {
> > +
> > +        OnGuarding = FALSE;
> > +
> > +      } else if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> > +
> > +        Level            += 1;
> > +        Tables[Level]     = TableEntry;
> > +        Addresses[Level]  = Address;
> > +        Indices[Level]    = 0;
> > +
> > +        continue;
> > +
> > +      } else {
> > +
> > +        Index = 0;
> > +        while (Index < GUARDED_HEAP_MAP_ENTRY_BITS) {
> > +          if ((TableEntry & 1) == 1) {
> > +            if (OnGuarding) {
> > +              GuardPage = 0;
> > +            } else {
> > +              GuardPage = Address - EFI_PAGE_SIZE;
> > +            }
> > +            OnGuarding = TRUE;
> > +          } else {
> > +            if (OnGuarding) {
> > +              GuardPage = Address;
> > +            } else {
> > +              GuardPage = 0;
> > +            }
> > +            OnGuarding = FALSE;
> > +          }
> > +
> > +          if (GuardPage != 0) {
> > +            SetGuardPage (GuardPage);
> > +          }
> > +
> > +          if (TableEntry == 0) {
> > +            break;
> > +          }
> > +
> > +          TableEntry = RShiftU64 (TableEntry, 1);
> > +          Address   += EFI_PAGE_SIZE;
> > +          Index     += 1;
> > +        }
> > +      }
> > +    }
> > +
> > +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> > +      break;
> > +    }
> > +
> > +    Indices[Level] += 1;
> > +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> > +    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
> > +
> > +  }
> > +}
> > +
> > +/**
> > +  Notify function used to set all Guard pages before CPU Arch Protocol
> installed.
> > +**/
> > +VOID
> > +HeapGuardCpuArchProtocolNotify (
> > +  VOID
> > +  )
> > +{
> > +  ASSERT (gCpu != NULL);
> > +  SetAllGuardPages ();
> > +}
> > +
> >   /**
> >     Helper function to convert a UINT64 value in binary to a string.
> >
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > index 7208ab1437..8c34692439 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > @@ -399,6 +399,14 @@ IsHeapGuardEnabled (
> >     VOID
> >     );
> >
> > +/**
> > +  Notify function used to set all Guard pages after CPU Arch Protocol installed.
> > +**/
> > +VOID
> > +HeapGuardCpuArchProtocolNotify (
> > +  VOID
> > +  );
> > +
> >   extern BOOLEAN mOnGuarding;
> >
> >   #endif
> > diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > index 407aece807..2f7e490af1 100644
> > --- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > +++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > @@ -1001,6 +1001,11 @@ MemoryProtectionCpuArchProtocolNotify (
> >       InitializeDxeNxMemoryProtectionPolicy ();
> >     }
> >
> > +  //
> > +  // Call notify function meant for Heap Guard.
> > +  //
> > +  HeapGuardCpuArchProtocolNotify ();
> > +
> >     if (mImageProtectionPolicy == 0) {
> >       return;
> >     }
> > diff --git a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> > index 923af93de2..f9657f9baa 100644
> > --- a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> > +++ b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> > @@ -73,7 +73,7 @@ SetBits (
> >     StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
> >     EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> >
> > -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> > +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
> >       Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
> >                 GUARDED_HEAP_MAP_ENTRY_BITS;
> >       Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> > @@ -126,7 +126,7 @@ ClearBits (
> >     StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
> >     EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> >
> > -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> > +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
> >       Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
> >                 GUARDED_HEAP_MAP_ENTRY_BITS;
> >       Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> > @@ -191,10 +191,14 @@ GetBits (
> >       Lsbs = 0;
> >     }
> >
> > -  Result    = RShiftU64 ((*BitMap), StartBit) & (LShiftU64 (1, Msbs) - 1);
> > -  if (Lsbs > 0) {
> > -    BitMap  += 1;
> > -    Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> > +  if (StartBit == 0 && BitNumber == GUARDED_HEAP_MAP_ENTRY_BITS) {
> > +    Result = *BitMap;
> > +  } else {
> > +    Result    = RShiftU64((*BitMap), StartBit) & (LShiftU64(1, Msbs) - 1);
> > +    if (Lsbs > 0) {
> > +      BitMap  += 1;
> > +      Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> > +    }
> >     }
> >
> >     return Result;
> >
> Jian,
> I think the single patch could be split into at least three patches.
> Two of them fix the ">=" issue in DxeCore and PiSmmCore.
> One of them fixes the HeapGuard gap.
> 
> --
> Thanks,
> Ray


end of thread, other threads:[~2018-03-15  4:58 UTC | newest]

Thread overview: 5+ messages
-- links below jump to the message on this page --
2018-03-15  2:27 [PATCH] MdeModulePkg/Core: allow HeapGuard even before CpuArchProtocol installed Jian J Wang
2018-03-15  3:46 ` Ni, Ruiyu
2018-03-15  5:04   ` Wang, Jian J
  -- strict thread matches above, loose matches on Subject: below --
2018-03-14  8:31 Jian J Wang
2018-03-15  1:09 ` Wang, Jian J
