From: "Wang, Jian J" <jian.j.wang@intel.com>
To: Laszlo Ersek <lersek@redhat.com>,
	"edk2-devel@lists.01.org" <edk2-devel@lists.01.org>
Cc: "Zeng, Star" <star.zeng@intel.com>,
	"Kinney, Michael D" <michael.d.kinney@intel.com>,
	"Yao, Jiewen" <jiewen.yao@intel.com>,
	"Ni, Ruiyu" <ruiyu.ni@intel.com>
Subject: Re: [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection
Date: Mon, 22 Oct 2018 07:34:09 +0000	[thread overview]
Message-ID: <D827630B58408649ACB04F44C510003624E85C1C@SHSMSX103.ccr.corp.intel.com> (raw)
In-Reply-To: <76de0333-60d0-e474-7db3-1a4ff49de0d0@redhat.com>

Hi Laszlo,

Regards,
Jian


> -----Original Message-----
> From: Laszlo Ersek [mailto:lersek@redhat.com]
> Sent: Friday, October 19, 2018 8:05 PM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Zeng, Star <star.zeng@intel.com>; Kinney, Michael D
> <michael.d.kinney@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Ni, Ruiyu
> <ruiyu.ni@intel.com>
> Subject: Re: [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory
> detection
> 
> On 10/19/18 03:50, Jian J Wang wrote:
> > UAF (Use-After-Free) memory detection is a new feature introduced to
> > detect illegal access to memory which has been freed. The principle
> > behind it is similar to the heap guard feature: the core turns all pool
> > memory allocations into page allocations and marks them as not-present
> > once they are freed.
> >
> > This also implies that, once a page is allocated and freed, it cannot
> > be re-allocated. This brings another issue: there is a risk that the
> > memory space will be exhausted. To address it, the memory service adds
> > logic to put part (at most 64 pages at a time) of the freed pages back
> > into the page pool, so that the memory service still has memory to
> > allocate once all memory space has been allocated. This is called
> > memory promotion. The promoted pages are always taken from the oldest
> > pages that have been freed.
> >
> > To use this feature, one can simply set the following PCD to 1:
> >   gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
> >
> > Please note this feature cannot be used together with the heap guard
> > feature controlled by PCD gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask.
> >
> > Cc: Star Zeng <star.zeng@intel.com>
> > Cc: Michael D Kinney <michael.d.kinney@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> > ---
> >  MdeModulePkg/Core/Dxe/DxeMain.h              |   1 +
> >  MdeModulePkg/Core/Dxe/DxeMain.inf            |   1 +
> >  MdeModulePkg/Core/Dxe/Gcd/Gcd.c              |   8 +
> >  MdeModulePkg/Core/Dxe/Mem/HeapGuard.c        | 393 ++++++++++++++++++++++++++-
> >  MdeModulePkg/Core/Dxe/Mem/HeapGuard.h        |  66 +++++
> >  MdeModulePkg/Core/Dxe/Mem/Page.c             |  39 ++-
> >  MdeModulePkg/Core/Dxe/Mem/Pool.c             |  21 +-
> >  MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c |  16 +-
> >  8 files changed, 519 insertions(+), 26 deletions(-)
> 
> I don't think I can review this patch well (I'm not an MdeModulePkg
> reviewer). However, I do think this patch is way too large. I suggest
> splitting out everything that counts as "preparation" into separate patches.
> 
> For example:

Any comments are welcome.

> 
> > diff --git a/MdeModulePkg/Core/Dxe/DxeMain.h b/MdeModulePkg/Core/Dxe/DxeMain.h
> > index 2dec9da5e3..ae75cc5b25 100644
> > --- a/MdeModulePkg/Core/Dxe/DxeMain.h
> > +++ b/MdeModulePkg/Core/Dxe/DxeMain.h
> > @@ -92,6 +92,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> >  #include <Library/DxeServicesLib.h>
> >  #include <Library/DebugAgentLib.h>
> >  #include <Library/CpuExceptionHandlerLib.h>
> > +#include <Library/DebugPrintErrorLevelLib.h>
> >
> >
> >  //
> > diff --git a/MdeModulePkg/Core/Dxe/DxeMain.inf b/MdeModulePkg/Core/Dxe/DxeMain.inf
> > index 10375443c0..d91258c087 100644
> > --- a/MdeModulePkg/Core/Dxe/DxeMain.inf
> > +++ b/MdeModulePkg/Core/Dxe/DxeMain.inf
> > @@ -198,6 +198,7 @@
> >    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType                       ## CONSUMES
> >    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask                   ## CONSUMES
> >    gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard                           ## CONSUMES
> > +  gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask       ## CONSUMES
> >
> >  # [Hob]
> >  # RESOURCE_DESCRIPTOR   ## CONSUMES
> > diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > index d9c65a8045..e43ec74010 100644
> > --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > @@ -160,6 +160,10 @@ CoreDumpGcdMemorySpaceMap (
> >      EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *MemorySpaceMap;
> >      UINTN                            Index;
> >
> > +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
> > +      return;
> > +    }
> > +
> >      Status = CoreGetMemorySpaceMap (&NumberOfDescriptors, &MemorySpaceMap);
> >      ASSERT (Status == EFI_SUCCESS && MemorySpaceMap != NULL);
> >
> 
> this change doesn't belong in the UAF feature introduction. This change
> streamlines some debug code, which may or may not be useful, but either
> way, it should be presented separately, and discussed separately.
> 

There's an issue here caused by the introduction of UAF (lock re-entrance,
see the call chain below), but you're right, I should put this change in a
separate patch.

   CoreDumpGcdMemorySpaceMap()
=> CoreGetMemorySpaceMap()
=> CoreAcquireGcdMemoryLock ()    <== (*) GCD lock taken, still held below
   AllocatePool()
=> InternalAllocatePool()
=> CoreAllocatePool()
=> CoreAllocatePoolI()
=> CoreAllocatePoolPagesI()
=> CoreAllocatePoolPages()
=> FindFreePages()
=> PromoteMemoryResource()
=> CoreAcquireGcdMemoryLock()     <== (**) GCD lock taken again: re-entrance
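
The lock taken at (*) is still held when (**) tries to take it again:
CoreGetMemorySpaceMap() calls AllocatePool() while holding the GCD lock,
and once the free pool is exhausted (much more likely with UAF enabled)
the allocator falls through to PromoteMemoryResource(). The separate
patch would then carry just the early-out shown in the hunk above, at the
top of CoreDumpGcdMemorySpaceMap() and CoreDumpGcdIoSpaceMap(); as a
sketch:

    //
    // Skip the dump (and the AllocatePool() inside CoreGetMemorySpaceMap())
    // when DEBUG_GCD output is disabled, so the GCD lock is never
    // re-entered through PromoteMemoryResource().
    //
    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
      return;
    }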

> > @@ -202,6 +206,10 @@ CoreDumpGcdIoSpaceMap (
> >      EFI_GCD_IO_SPACE_DESCRIPTOR  *IoSpaceMap;
> >      UINTN                        Index;
> >
> > +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
> > +      return;
> > +    }
> > +
> >      Status = CoreGetIoSpaceMap (&NumberOfDescriptors, &IoSpaceMap);
> >      ASSERT (Status == EFI_SUCCESS && IoSpaceMap != NULL);
> >
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > index 663f969c0d..b120c04f8f 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >  GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >                                      = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
> >
> > +//
> > +// Used for Use-After-Free memory detection to promote freed but not used pages.
> > +//
> > +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
> >
> >  /**
> >    Set corresponding bits in bitmap table to 1 according to the address.
> >
> > @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
> >
> >    @return An integer containing the guarded memory bitmap.
> >  **/
> > -UINTN
> > +UINT64
> >  GetGuardedMemoryBits (
> >    IN EFI_PHYSICAL_ADDRESS    Address,
> >    IN UINTN                   NumberOfPages
> > @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
> >  {
> >    UINT64            *BitMap;
> >    UINTN             Bits;
> > -  UINTN             Result;
> > +  UINT64            Result;
> >    UINTN             Shift;
> >    UINTN             BitsToUnitEnd;
> >
> > @@ -1203,6 +1208,373 @@ SetAllGuardPages (
> >    }
> >  }
> >
> > +/**
> > +  Check to see if the Use-After-Free (UAF) feature is enabled or not.
> > +
> > +  @return TRUE/FALSE.
> > +**/
> > +BOOLEAN
> > +IsUafEnabled (
> > +  VOID
> > +  )
> > +{
> > +  ASSERT (!IsHeapGuardEnabled());
> 
> This seems strange. Do you really mean to assert that the heap guard is
> disabled *before* you check the UAF PCD setting?
> 
> ... Indeed, I see some confusing conditions related to this, below:
> 

I also found, after sending the patch out, that the logic here is not right.
I'll fix it.
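
For now my thinking is to make the check self-contained, along the lines of
your "robust checker" suggestion further down. Only a sketch, not the final
code:

    BOOLEAN
    IsUafEnabled (
      VOID
      )
    {
      BOOLEAN  Enabled;

      Enabled = ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) != 0);
      //
      // UAF detection and heap guard are mutually exclusive; assert the
      // conflict only when UAF is actually enabled.
      //
      ASSERT (!Enabled || !IsHeapGuardEnabled ());
      return Enabled;
    }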

> > +  return ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) != 0);
> > +}
> > +
> > +/**
> > +  Find the address of top-most guarded free page.
> > +
> > +  @param[out]  Address    Start address of top-most guarded free page.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +GetLastGuardedFreePageAddress (
> > +  OUT EFI_PHYSICAL_ADDRESS      *Address
> > +  )
> > +{
> > +  EFI_PHYSICAL_ADDRESS    AddressGranularity;
> > +  EFI_PHYSICAL_ADDRESS    BaseAddress;
> > +  UINTN                   Level;
> > +  UINT64                  Map;
> > +  INTN                    Index;
> > +
> > +  ASSERT (mMapLevel >= 1);
> > +
> > +  BaseAddress = 0;
> > +  Map = mGuardedMemoryMap;
> > +  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > +       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
> > +       ++Level) {
> > +    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
> > +
> > +    //
> > +    // Find the non-NULL entry at largest index.
> > +    //
> > +    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
> > +      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
> > +        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
> > +        Map = ((UINT64 *)(UINTN)Map)[Index];
> > +        break;
> > +      }
> > +    }
> > +  }
> > +
> > +  //
> > +  // Find the non-zero MSB then get the page address.
> > +  //
> > +  while (Map != 0) {
> > +    Map = RShiftU64 (Map, 1);
> > +    BaseAddress += EFI_PAGES_TO_SIZE (1);
> > +  }
> > +
> > +  *Address = BaseAddress;
> > +}
> > +
> > +/**
> > +  Record freed pages.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +MarkFreedPages (
> > +  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
> > +  IN UINTN                    Pages
> > +  )
> > +{
> > +  SetGuardedMemoryBits (BaseAddress, Pages);
> > +}
> > +
> > +/**
> > +  Record freed pages as well as mark them as not-present, if possible.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPages (
> > +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> > +  IN  UINTN                   Pages
> > +  )
> > +{
> > +  EFI_STATUS      Status;
> > +
> > +  //
> > +  // Do nothing if
> > +  // - Use-After-Free detection is not enabled
> > +  // - Freed memory is legacy memory lower than 1MB
> > +  //
> > +  if (!IsUafEnabled () || BaseAddress < BASE_1MB) {
> > +    return;
> > +  }
> 
> OK, so remember this: GuardFreedPages() calls IsUafEnabled().
> 
> > +
> > +  MarkFreedPages (BaseAddress, Pages);
> > +  if (gCpu != NULL) {
> > +    //
> > +    // Set flag to make sure allocating memory without GUARD for page table
> > +    // operation; otherwise infinite loops could be caused.
> > +    //
> > +    mOnGuarding = TRUE;
> > +    //
> > +    // Note: This might overwrite other attributes needed by other features,
> > +    // such as NX memory protection.
> > +    //
> > +    Status = gCpu->SetMemoryAttributes (
> > +                     gCpu,
> > +                     BaseAddress,
> > +                     EFI_PAGES_TO_SIZE(Pages),
> > +                     EFI_MEMORY_RP
> > +                     );
> > +    //
> > +    // Normally we should check the returned Status. But there might be memory
> > +    // alloc/free involved in SetMemoryAttributes(), which would fail this
> > +    // calling. It's rare case so it's OK to let a few tiny holes be not-guarded.
> > +    //
> > +    if (EFI_ERROR (Status)) {
> > +      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%d)\n", BaseAddress, Pages));
> > +    }
> > +    mOnGuarding = FALSE;
> > +  }
> > +}
> > +
> > +/**
> > +  Mark all pages freed before CPU Arch Protocol as not-present.
> > +
> > +**/
> > +VOID
> > +GuardAllFreedPages (
> > +  VOID
> > +  )
> > +{
> > +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    TableEntry;
> > +  UINT64    Address;
> > +  UINT64    GuardPage;
> > +  INTN      Level;
> > +  UINTN     BitIndex;
> > +  UINTN     GuardPageNumber;
> > +
> > +  if (mGuardedMemoryMap == 0 ||
> > +      mMapLevel == 0 ||
> > +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > +    return;
> > +  }
> > +
> > +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> > +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> > +
> > +  SetMem (Tables, sizeof(Tables), 0);
> > +  SetMem (Addresses, sizeof(Addresses), 0);
> > +  SetMem (Indices, sizeof(Indices), 0);
> > +
> > +  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > +  Tables[Level]   = mGuardedMemoryMap;
> > +  Address         = 0;
> > +  GuardPage       = (UINT64)-1;
> > +  GuardPageNumber = 0;
> > +
> > +  while (TRUE) {
> > +    if (Indices[Level] > Entries[Level]) {
> > +      Tables[Level] = 0;
> > +      Level        -= 1;
> > +    } else {
> > +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> > +      Address     = Addresses[Level];
> > +
> > +      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> > +        Level            += 1;
> > +        Tables[Level]     = TableEntry;
> > +        Addresses[Level]  = Address;
> > +        Indices[Level]    = 0;
> > +
> > +        continue;
> > +      } else {
> > +        BitIndex = 1;
> > +        while (BitIndex != 0) {
> > +          if ((TableEntry & BitIndex) != 0) {
> > +            if (GuardPage == (UINT64)-1) {
> > +              GuardPage = Address;
> > +            }
> > +            ++GuardPageNumber;
> > +          } else if (GuardPageNumber > 0) {
> > +            GuardFreedPages (GuardPage, GuardPageNumber);
> 
> Then GuardAllFreedPages() calls GuardFreedPages().
> 
> > +            GuardPageNumber = 0;
> > +            GuardPage       = (UINT64)-1;
> > +          }
> > +
> > +          if (TableEntry == 0) {
> > +            break;
> > +          }
> > +
> > +          Address += EFI_PAGES_TO_SIZE (1);
> > +          BitIndex = LShiftU64(BitIndex, 1);
> > +        }
> > +      }
> > +    }
> > +
> > +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> > +      break;
> > +    }
> > +
> > +    Indices[Level] += 1;
> > +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> > +    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
> > +
> > +  }
> > +
> > +  //
> > +  // Update the maximum address of freed page which can be used for memory
> > +  // promotion upon out-of-memory-space.
> > +  //
> > +  GetLastGuardedFreePageAddress (&Address);
> > +  if (Address != 0) {
> > +    mLastPromotedPage = Address;
> > +  }
> > +}
> > +
> > +/**
> > +  This function checks to see if the given memory map descriptor in a memory map
> > +  can be merged with any guarded free pages.
> > +
> > +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> > +  @param  MaxAddress        Maximum address to stop the merge.
> > +
> > +  @return VOID
> > +
> > +**/
> > +VOID
> > +MergeGuardPages (
> > +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> > +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> > +  )
> > +{
> > +  EFI_PHYSICAL_ADDRESS        EndAddress;
> > +  UINT64                      Bitmap;
> > +  INTN                        Pages;
> > +
> > +  if (!IsUafEnabled () ||
> > +      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
> > +    return;
> > +  }
> > +
> > +  Bitmap = 0;
> > +  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
> > +  Pages -= MemoryMapEntry->NumberOfPages;
> > +  while (Pages > 0) {
> > +    if (Bitmap == 0) {
> > +      EndAddress = MemoryMapEntry->PhysicalStart +
> > +                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
> > +      Bitmap = GetGuardedMemoryBits(EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    }
> > +
> > +    if ((Bitmap & 1) == 0) {
> > +      break;
> > +    }
> > +
> > +    Pages--;
> > +    MemoryMapEntry->NumberOfPages++;
> > +    Bitmap = RShiftU64 (Bitmap, 1);
> > +  }
> > +}
> > +
> > +/**
> > +  Put part (at most 64 pages a time) guarded free pages back to free page pool.
> > +
> > +  Use-After-Free detection makes use of 'Used then throw away' way to detect
> > +  any illegal access to freed memory. The thrown-away memory will be marked as
> > +  not-present so that any access to those memory (after free) will trigger
> > +  page-fault.
> > +
> > +  The problem is that this will consume lots of memory space. Once no memory
> > +  left in pool to allocate, we have to restore part of the freed pages to their
> > +  normal function. Otherwise the whole system will stop functioning.
> > +
> > +  @param  StartAddress    Start address of promoted memory.
> > +  @param  EndAddress      End address of promoted memory.
> > +
> > +  @return TRUE    Succeeded to promote memory.
> > +  @return FALSE   No free memory found.
> > +
> > +**/
> > +BOOLEAN
> > +PromoteGuardedFreePages (
> > +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> > +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> > +  )
> > +{
> > +  EFI_STATUS              Status;
> > +  UINTN                   AvailablePages;
> > +  UINT64                  Bitmap;
> > +  EFI_PHYSICAL_ADDRESS    Start;
> > +
> > +  if (!IsUafEnabled ()) {
> > +    return FALSE;
> > +  }
> > +
> > +  //
> > +  // Similar to memory allocation service, always search the freed pages in
> > +  // descending direction.
> > +  //
> > +  Start           = mLastPromotedPage;
> > +  AvailablePages  = 0;
> > +  while (AvailablePages == 0) {
> > +    Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    //
> > +    // If the address wraps around, try the really freed pages at top.
> > +    //
> > +    if (Start > mLastPromotedPage) {
> > +      GetLastGuardedFreePageAddress (&Start);
> > +      ASSERT (Start != 0);
> > +      Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    }
> > +
> > +    Bitmap = GetGuardedMemoryBits(Start, GUARDED_HEAP_MAP_ENTRY_BITS);
> > +    while (Bitmap > 0) {
> > +      if ((Bitmap & 1) != 0) {
> > +        ++AvailablePages;
> > +      } else if (AvailablePages == 0) {
> > +        Start += EFI_PAGES_TO_SIZE (1);
> > +      } else {
> > +        break;
> > +      }
> > +
> > +      Bitmap = RShiftU64(Bitmap, 1);
> > +    }
> > +  }
> > +
> > +  if (AvailablePages) {
> > +    DEBUG((DEBUG_INFO, "Promoted pages: %lX (%X)\r\n", Start, AvailablePages));
> > +    if (gCpu != NULL) {
> > +      //
> > +      // Set flag to make sure allocating memory without GUARD for page table
> > +      // operation; otherwise infinite loops could be caused.
> > +      //
> > +      mOnGuarding = TRUE;
> > +      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
> > +      ASSERT_EFI_ERROR (Status);
> > +      mOnGuarding = FALSE;
> > +    }
> > +
> > +    mLastPromotedPage = Start;
> > +    *StartAddress     = Start;
> > +    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
> > +    return TRUE;
> > +  }
> > +
> > +  return FALSE;
> > +}
> > +
> >  /**
> >    Notify function used to set all Guard pages before CPU Arch Protocol installed.
> >  **/
> > @@ -1212,7 +1584,14 @@ HeapGuardCpuArchProtocolNotify (
> >    )
> >  {
> >    ASSERT (gCpu != NULL);
> > -  SetAllGuardPages ();
> > +
> > +  if ((PcdGet8 (PcdHeapGuardPropertyMask) & 0x0F) != 0) {
> > +    SetAllGuardPages ();
> > +  }
> > +
> > +  if (IsUafEnabled ()) {
> > +    GuardAllFreedPages ();
> 
> But GuardAllFreedPages() is only called dependent on IsUafEnabled().
> 
> This means that the internal IsUafEnabled() call, in GuardFreedPages(),
> is superfluous -- at least when GuardFreedPages() is called from
> GuardAllFreedPages().
> 
> There are other places that call GuardFreedPages() too, and from those
> call sites, the internal IsUafEnabled() does seem necessary.
> 
> This is very confusing when someone would like to confirm that this
> patch "does nothing" when the PCD is clear.
> 
> I suggest the following improvement:
> 
> - Make IsUafEnabled() a robust checker function. It should be possible
> to call it from any context, and it should only trip an assert (regarding
> the heap guard) when the UAF detection is in fact enabled.
> 
> - Introduce two variants of GuardFreedPages(). The first variant should
> *assume* that UAF detection is enabled. GuardAllFreedPages() should call
> this 1st variant. The 2nd variant should check UAF explicitly, and then
> call the 1st variant.
> 
> Basically, when I read through this patch, I should be able to mentally
> replace some function calls with constant FALSE.
> 
> Thanks
> Laszlo
> 
> 

Thanks for the thorough review. You've given me some good suggestions. I'll
update the code to make it cleaner.
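
For the two variants of GuardFreedPages(), I have something like this in
mind (names and prototypes are illustrative only, not the final code):

    //
    // 1st variant: assumes UAF detection is enabled. GuardAllFreedPages()
    // calls this directly, since it has already checked the PCD.
    //
    VOID
    GuardFreedPages (
      IN EFI_PHYSICAL_ADDRESS  BaseAddress,
      IN UINTN                 Pages
      );

    //
    // 2nd variant: checks UAF explicitly and then calls the 1st variant.
    // Meant for call sites like CoreFreePages() and CoreFreePoolPagesI().
    //
    VOID
    GuardFreedPagesChecked (
      IN EFI_PHYSICAL_ADDRESS  BaseAddress,
      IN UINTN                 Pages
      )
    {
      if (IsUafEnabled () && BaseAddress >= BASE_1MB) {
        GuardFreedPages (BaseAddress, Pages);
      }
    }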

> 
> > +  }
> >  }
> >
> >  /**
> > @@ -1264,6 +1643,14 @@ DumpGuardedMemoryBitmap (
> >    CHAR8     *Ruler1;
> >    CHAR8     *Ruler2;
> >
> > +  if (!IsHeapGuardEnabled () && !IsUafEnabled ()) {
> > +    return;
> > +  }
> > +
> > +  if ((GetDebugPrintErrorLevel () & HEAP_GUARD_DEBUG_LEVEL) == 0) {
> > +    return;
> > +  }
> > +
> >    if (mGuardedMemoryMap == 0 ||
> >        mMapLevel == 0 ||
> >        mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > index 8c34692439..3f2a5283c2 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > @@ -407,6 +407,72 @@ HeapGuardCpuArchProtocolNotify (
> >    VOID
> >    );
> >
> > +/**
> > +  This function checks to see if the given memory map descriptor in a memory map
> > +  can be merged with any guarded free pages.
> > +
> > +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> > +  @param  MaxAddress        Maximum address to stop the merge.
> > +
> > +  @return VOID
> > +
> > +**/
> > +VOID
> > +MergeGuardPages (
> > +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> > +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> > +  );
> > +
> > +/**
> > +  Record freed pages as well as mark them as not-present, if possible.
> > +
> > +  @param[in]  BaseAddress   Base address of just freed pages.
> > +  @param[in]  Pages         Number of freed pages.
> > +
> > +  @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPages (
> > +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> > +  IN  UINTN                   Pages
> > +  );
> > +
> > +/**
> > +  Check to see if the use-after-free (UAF) feature is enabled or not.
> > +
> > +  @return TRUE/FALSE.
> > +**/
> > +BOOLEAN
> > +IsUafEnabled (
> > +  VOID
> > +  );
> > +
> > +/**
> > +  Put part (at most 64 pages a time) guarded free pages back to free page pool.
> > +
> > +  Use-After-Free detection makes use of 'Used then throw away' way to detect
> > +  any illegal access to freed memory. The thrown-away memory will be marked as
> > +  not-present so that any access to those memory (after free) will trigger
> > +  page-fault.
> > +
> > +  The problem is that this will consume lots of memory space. Once no memory
> > +  left in pool to allocate, we have to restore part of the freed pages to their
> > +  normal function. Otherwise the whole system will stop functioning.
> > +
> > +  @param  StartAddress    Start address of promoted memory.
> > +  @param  EndAddress      End address of promoted memory.
> > +
> > +  @return TRUE    Succeeded to promote memory.
> > +  @return FALSE   No free memory found.
> > +
> > +**/
> > +BOOLEAN
> > +PromoteGuardedFreePages (
> > +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> > +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> > +  );
> > +
> >  extern BOOLEAN mOnGuarding;
> >
> >  #endif
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
> > index 3b4cc08e7c..9f4419fa5b 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
> > @@ -401,9 +401,11 @@ PromoteMemoryResource (
> >    VOID
> >    )
> >  {
> > -  LIST_ENTRY         *Link;
> > -  EFI_GCD_MAP_ENTRY  *Entry;
> > -  BOOLEAN            Promoted;
> > +  LIST_ENTRY            *Link;
> > +  EFI_GCD_MAP_ENTRY     *Entry;
> > +  BOOLEAN               Promoted;
> > +  EFI_PHYSICAL_ADDRESS  StartAddress;
> > +  EFI_PHYSICAL_ADDRESS  EndAddress;
> >
> >    DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
> >
> > @@ -451,6 +453,24 @@ PromoteMemoryResource (
> >
> >    CoreReleaseGcdMemoryLock ();
> >
> > +  if (!Promoted) {
> > +    //
> > +    // If Use-After-Free detection is enabled, we might promote pages from
> > +    // guarded free pages.
> > +    //
> > +    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
> > +    if (Promoted) {
> > +      CoreAcquireGcdMemoryLock ();
> > +      CoreAddRange (
> > +        EfiConventionalMemory,
> > +        StartAddress,
> > +        EndAddress,
> > +        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
> > +        );
> > +      CoreReleaseGcdMemoryLock ();
> > +    }
> > +  }
> > +
> >    return Promoted;
> >  }
> >  /**
> > @@ -896,9 +916,13 @@ CoreConvertPagesEx (
> >      }
> >
> >      //
> > -    // Add our new range in
> > +    // Add our new range in. Don't do this for freed pages if Use-After-Free
> > +    // detection is enabled.
> >      //
> > -    CoreAddRange (MemType, Start, RangeEnd, Attribute);
> > +    if (!IsUafEnabled () || !ChangingType || MemType != EfiConventionalMemory) {
> > +      CoreAddRange (MemType, Start, RangeEnd, Attribute);
> > +    }
> > +
> >      if (ChangingType && (MemType == EfiConventionalMemory)) {
> >        //
> >        // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
> > @@ -1514,6 +1538,7 @@ CoreFreePages (
> >
> >    Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
> >    if (!EFI_ERROR (Status)) {
> > +    GuardFreedPages (Memory, NumberOfPages);
> >      CoreUpdateProfile (
> >        (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
> >        MemoryProfileActionFreePages,
> > @@ -1908,9 +1933,7 @@ Done:
> >    *MemoryMapSize = BufferSize;
> >
> >    DEBUG_CODE (
> > -    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
> > -      DumpGuardedMemoryBitmap ();
> > -    }
> > +    DumpGuardedMemoryBitmap ();
> >    );
> >
> >    return Status;
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > index 1ff2061f7f..0cd74267b3 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > @@ -26,7 +26,8 @@ typedef struct {
> >  } POOL_FREE;
> >
> >
> > -#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
> > +#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
> > +#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
> >  typedef struct {
> >    UINT32          Signature;
> >    UINT32          Reserved;
> > @@ -367,6 +368,7 @@ CoreAllocatePoolI (
> >    UINTN       NoPages;
> >    UINTN       Granularity;
> >    BOOLEAN     HasPoolTail;
> > +  BOOLEAN     PageAsPool;
> >
> >    ASSERT_LOCKED (&mPoolMemoryLock);
> >
> > @@ -386,6 +388,7 @@ CoreAllocatePoolI (
> >
> >    HasPoolTail  = !(NeedGuard &&
> >                     ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > +  PageAsPool = (IsUafEnabled () && !mOnGuarding);
> >
> >    //
> >    // Adjusting the Size to be of proper alignment so that
> > @@ -406,7 +409,7 @@ CoreAllocatePoolI (
> >    // If allocation is over max size, just allocate pages for the request
> >    // (slow)
> >    //
> > -  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
> > +  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
> >      if (!HasPoolTail) {
> >        Size -= sizeof (POOL_TAIL);
> >      }
> > @@ -498,7 +501,7 @@ Done:
> >      //
> >      // If we have a pool buffer, fill in the header & tail info
> >      //
> > -    Head->Signature = POOL_HEAD_SIGNATURE;
> > +    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
> >      Head->Size      = Size;
> >      Head->Type      = (EFI_MEMORY_TYPE) PoolType;
> >      Buffer          = Head->Data;
> > @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
> >    CoreFreePoolPages (Memory, NoPages);
> >    CoreReleaseMemoryLock ();
> >
> > +  GuardFreedPages (Memory, NoPages);
> >    ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
> >      (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
> >  }
> > @@ -685,15 +689,19 @@ CoreFreePoolI (
> >    UINTN       Granularity;
> >    BOOLEAN     IsGuarded;
> >    BOOLEAN     HasPoolTail;
> > +  BOOLEAN     PageAsPool;
> >
> >    ASSERT(Buffer != NULL);
> >    //
> >    // Get the head & tail of the pool entry
> >    //
> > -  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
> > +  Head = BASE_CR (Buffer, POOL_HEAD, Data);
> >    ASSERT(Head != NULL);
> >
> > -  if (Head->Signature != POOL_HEAD_SIGNATURE) {
> > +  if (Head->Signature != POOL_HEAD_SIGNATURE &&
> > +      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
> > +    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
> > +            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >      return EFI_INVALID_PARAMETER;
> >    }
> >
> > @@ -701,6 +709,7 @@ CoreFreePoolI (
> >                  IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
> >    HasPoolTail = !(IsGuarded &&
> >                    ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > +  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >
> >    if (HasPoolTail) {
> >      Tail = HEAD_TO_TAIL (Head);
> > @@ -757,7 +766,7 @@ CoreFreePoolI (
> >    //
> >    // If it's not on the list, it must be pool pages
> >    //
> > -  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
> > +  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
> >
> >      //
> >      // Return the memory pages back to free memory
> > diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > index 05eb4f422b..f6595c90ed 100644
> > --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> >  #include <Guid/PropertiesTable.h>
> >
> >  #include "DxeMain.h"
> > +#include "HeapGuard.h"
> >
> >  #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
> >    ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
> > @@ -205,16 +206,13 @@ MergeMemoryMap (
> >      NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
> >
> >      do {
> > -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
> > +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
> > +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
> >        if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
> > -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> > -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> > -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> > -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> > -        if (NewMemoryMapEntry != MemoryMapEntry) {
> > -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> > -        }
> > -
> > +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> > +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> > +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> > +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> >          NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
> >          continue;
> >        } else {
> >

