From: "Wang, Jian J" <jian.j.wang@intel.com>
To: "Zeng, Star" <star.zeng@intel.com>,
"edk2-devel@lists.01.org" <edk2-devel@lists.01.org>
Cc: "Kinney, Michael D" <michael.d.kinney@intel.com>,
"Ni, Ruiyu" <ruiyu.ni@intel.com>,
"Yao, Jiewen" <jiewen.yao@intel.com>,
Laszlo Ersek <lersek@redhat.com>
Subject: Re: [PATCH v3 6/6] MdeModulePkg/Core: add freed-memory guard feature
Date: Thu, 25 Oct 2018 04:29:15 +0000
Message-ID: <D827630B58408649ACB04F44C510003624E94D0D@SHSMSX103.ccr.corp.intel.com>
In-Reply-To: <e3c07f73-9bff-aeb2-9293-69f5c9ef28b9@intel.com>
Star,
> -----Original Message-----
> From: Zeng, Star
> Sent: Thursday, October 25, 2018 11:37 AM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
> Subject: Re: [edk2] [PATCH v3 6/6] MdeModulePkg/Core: add freed-memory guard feature
>
> On 2018/10/24 13:26, Jian J Wang wrote:
> >> v3 changes:
> >> a. Merge from #4 and #5 of v2 patch
> >> b. Coding style cleanup
> >
> > Freed-memory guard is used to detect UAF (Use-After-Free) memory issues,
> > i.e. illegal accesses to memory which has been freed. The principle
> > behind it is similar to the heap guard feature: we turn all pool
>
> Since we also regard UAF as part of the heap guard feature, it would be
> better to use "pool/page heap guard feature" instead of "heap guard
> feature" here.
>
You're right. I'll change it.
> I quoted a piece of code below and have a question: why does it use a
> hard-coded Attribute parameter?
>
> + CoreAddRange (
> + EfiConventionalMemory,
> + StartAddress,
> + EndAddress,
> + EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
> + );
>
Because I don't know the actual attributes at that time, I think it'd be
safer to add all supported ones.
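
For reference, here is a condensed sketch of that promotion path (reduced
from the Page.c hunk below; the surrounding retry logic is trimmed). The
point is that the promoted range is advertised with the full set of
cacheability capabilities, since the original ones are not tracked for
freed pages:

  if (PromoteGuardedFreePages (&StartAddress, &EndAddress)) {
    CoreAcquireGcdMemoryLock ();
    //
    // The actual attributes of the promoted range are unknown at this
    // point, so declare all supported cacheability attributes instead of
    // guessing a narrower mask.
    //
    CoreAddRange (
      EfiConventionalMemory,
      StartAddress,
      EndAddress,
      EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
      );
    CoreReleaseGcdMemoryLock ();
  }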
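
Also, for anyone skimming the patch: the core of the freed-memory guard is
the short sequence below, reduced from GuardFreedPages() in the HeapGuard.c
hunk (error handling trimmed). Freed pages are recorded in the guard bitmap
and then mapped as not-present, so any later access to them is caught by a
page-fault exception:

  MarkFreedPages (BaseAddress, Pages);  // record the pages in the guard bitmap
  if (gCpu != NULL) {
    //
    // Memory allocated for page table operations must not be guarded here;
    // otherwise the guard logic would recurse endlessly.
    //
    mOnGuarding = TRUE;
    Status = gCpu->SetMemoryAttributes (
                     gCpu,
                     BaseAddress,
                     EFI_PAGES_TO_SIZE (Pages),
                     EFI_MEMORY_RP  // mark the pages as not-present
                     );
    mOnGuarding = FALSE;
  }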

Regards,
Jian
> Thanks,
> Star
>
> > memory allocations into page allocations and mark the pages as
> > not-present once they are freed. The freed memory blocks will not be
> > added back into the memory pool.
> >
> > This means that, once a page is allocated and freed, it cannot be
> > re-allocated. This brings a risk that memory space will eventually be
> > exhausted. To address it, the memory service adds logic to put part
> > (at most 64 pages at a time) of the freed pages back into the page
> > pool, so that the memory service still has memory to allocate after
> > all memory space has been allocated once. This is called memory
> > promotion. The promoted pages are always taken from the oldest pages
> > that have been freed.
> >
> > Another problem this feature brings is that the number of memory map
> > descriptors increases enormously (200+ -> 2000+). One of the changes
> > in this patch is to update MergeMemoryMap() in PropertiesTable.c to
> > allow merging freed pages back into the memory map. With that change,
> > the number stays at around 510.
> >
> > Cc: Star Zeng <star.zeng@intel.com>
> > Cc: Michael D Kinney <michael.d.kinney@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> > ---
> > MdeModulePkg/Core/Dxe/Mem/HeapGuard.c | 409 +++++++++++++++++++++++++-
> > MdeModulePkg/Core/Dxe/Mem/HeapGuard.h | 65 +++-
> > MdeModulePkg/Core/Dxe/Mem/Page.c | 41 ++-
> > MdeModulePkg/Core/Dxe/Mem/Pool.c | 23 +-
> > MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c | 2 +-
> > MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c | 18 +-
> > 6 files changed, 524 insertions(+), 34 deletions(-)
> >
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > index 663f969c0d..449a022658 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
> > GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
> > = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
> >
> > +//
> > +// Used for promoting freed but not used pages.
> > +//
> > +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
> > +
> > /**
> > Set corresponding bits in bitmap table to 1 according to the address.
> >
> > @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
> >
> > @return An integer containing the guarded memory bitmap.
> > **/
> > -UINTN
> > +UINT64
> > GetGuardedMemoryBits (
> > IN EFI_PHYSICAL_ADDRESS Address,
> > IN UINTN NumberOfPages
> > @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
> > {
> > UINT64 *BitMap;
> > UINTN Bits;
> > - UINTN Result;
> > + UINT64 Result;
> > UINTN Shift;
> > UINTN BitsToUnitEnd;
> >
> > @@ -660,15 +665,16 @@ IsPageTypeToGuard (
> > /**
> > Check to see if the heap guard is enabled for page and/or pool allocation.
> >
> > + @param[in] GuardType Specify the sub-type(s) of Heap Guard.
> > +
> > @return TRUE/FALSE.
> > **/
> > BOOLEAN
> > IsHeapGuardEnabled (
> > - VOID
> > + UINT8 GuardType
> > )
> > {
> > - return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages,
> > - GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_PAGE);
> > + return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages, GuardType);
> > }
> >
> > /**
> > @@ -1203,6 +1209,380 @@ SetAllGuardPages (
> > }
> > }
> >
> > +/**
> > + Find the address of top-most guarded free page.
> > +
> > + @param[out] Address Start address of top-most guarded free page.
> > +
> > + @return VOID.
> > +**/
> > +VOID
> > +GetLastGuardedFreePageAddress (
> > + OUT EFI_PHYSICAL_ADDRESS *Address
> > + )
> > +{
> > + EFI_PHYSICAL_ADDRESS AddressGranularity;
> > + EFI_PHYSICAL_ADDRESS BaseAddress;
> > + UINTN Level;
> > + UINT64 Map;
> > + INTN Index;
> > +
> > + ASSERT (mMapLevel >= 1);
> > +
> > + BaseAddress = 0;
> > + Map = mGuardedMemoryMap;
> > + for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > + Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
> > + ++Level) {
> > + AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
> > +
> > + //
> > + // Find the non-NULL entry at largest index.
> > + //
> > + for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
> > + if (((UINT64 *)(UINTN)Map)[Index] != 0) {
> > + BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
> > + Map = ((UINT64 *)(UINTN)Map)[Index];
> > + break;
> > + }
> > + }
> > + }
> > +
> > + //
> > + // Find the non-zero MSB then get the page address.
> > + //
> > + while (Map != 0) {
> > + Map = RShiftU64 (Map, 1);
> > + BaseAddress += EFI_PAGES_TO_SIZE (1);
> > + }
> > +
> > + *Address = BaseAddress;
> > +}
> > +
> > +/**
> > + Record freed pages.
> > +
> > + @param[in] BaseAddress Base address of just freed pages.
> > + @param[in] Pages Number of freed pages.
> > +
> > + @return VOID.
> > +**/
> > +VOID
> > +MarkFreedPages (
> > + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> > + IN UINTN Pages
> > + )
> > +{
> > + SetGuardedMemoryBits (BaseAddress, Pages);
> > +}
> > +
> > +/**
> > + Record freed pages as well as mark them as not-present.
> > +
> > + @param[in] BaseAddress Base address of just freed pages.
> > + @param[in] Pages Number of freed pages.
> > +
> > + @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPages (
> > + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> > + IN UINTN Pages
> > + )
> > +{
> > + EFI_STATUS Status;
> > +
> > + //
> > + // Legacy memory lower than 1MB might be accessed with no allocation. Leave
> > + // them alone.
> > + //
> > + if (BaseAddress < BASE_1MB) {
> > + return;
> > + }
> > +
> > + MarkFreedPages (BaseAddress, Pages);
> > + if (gCpu != NULL) {
> > + //
> > + // Set flag to make sure allocating memory without GUARD for page table
> > + // operation; otherwise infinite loops could be caused.
> > + //
> > + mOnGuarding = TRUE;
> > + //
> > + // Note: This might overwrite other attributes needed by other features,
> > + // such as NX memory protection.
> > + //
> > + Status = gCpu->SetMemoryAttributes (
> > + gCpu,
> > + BaseAddress,
> > + EFI_PAGES_TO_SIZE (Pages),
> > + EFI_MEMORY_RP
> > + );
> > + //
> > + // Normally we should ASSERT the returned Status. But there might be memory
> > + // alloc/free involved in SetMemoryAttributes(), which might fail this
> > + // calling. It's rare case so it's OK to let a few tiny holes be not-guarded.
> > + //
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%lu)\n", BaseAddress, (UINT64)Pages));
> > + }
> > + mOnGuarding = FALSE;
> > + }
> > +}
> > +
> > +/**
> > + Record freed pages as well as mark them as not-present, if enabled.
> > +
> > + @param[in] BaseAddress Base address of just freed pages.
> > + @param[in] Pages Number of freed pages.
> > +
> > + @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPagesChecked (
> > + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> > + IN UINTN Pages
> > + )
> > +{
> > + if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > + GuardFreedPages (BaseAddress, Pages);
> > + }
> > +}
> > +
> > +/**
> > + Mark all pages freed before CPU Arch Protocol as not-present.
> > +
> > +**/
> > +VOID
> > +GuardAllFreedPages (
> > + VOID
> > + )
> > +{
> > + UINTN Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > + UINTN Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > + UINTN Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > + UINT64 Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > + UINT64 Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > + UINT64 TableEntry;
> > + UINT64 Address;
> > + UINT64 GuardPage;
> > + INTN Level;
> > + UINTN BitIndex;
> > + UINTN GuardPageNumber;
> > +
> > + if (mGuardedMemoryMap == 0 ||
> > + mMapLevel == 0 ||
> > + mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > + return;
> > + }
> > +
> > + CopyMem (Entries, mLevelMask, sizeof (Entries));
> > + CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> > +
> > + SetMem (Tables, sizeof(Tables), 0);
> > + SetMem (Addresses, sizeof(Addresses), 0);
> > + SetMem (Indices, sizeof(Indices), 0);
> > +
> > + Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > + Tables[Level] = mGuardedMemoryMap;
> > + Address = 0;
> > + GuardPage = (UINT64)-1;
> > + GuardPageNumber = 0;
> > +
> > + while (TRUE) {
> > + if (Indices[Level] > Entries[Level]) {
> > + Tables[Level] = 0;
> > + Level -= 1;
> > + } else {
> > + TableEntry = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> > + Address = Addresses[Level];
> > +
> > + if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> > + Level += 1;
> > + Tables[Level] = TableEntry;
> > + Addresses[Level] = Address;
> > + Indices[Level] = 0;
> > +
> > + continue;
> > + } else {
> > + BitIndex = 1;
> > + while (BitIndex != 0) {
> > + if ((TableEntry & BitIndex) != 0) {
> > + if (GuardPage == (UINT64)-1) {
> > + GuardPage = Address;
> > + }
> > + ++GuardPageNumber;
> > + } else if (GuardPageNumber > 0) {
> > + GuardFreedPages (GuardPage, GuardPageNumber);
> > + GuardPageNumber = 0;
> > + GuardPage = (UINT64)-1;
> > + }
> > +
> > + if (TableEntry == 0) {
> > + break;
> > + }
> > +
> > + Address += EFI_PAGES_TO_SIZE (1);
> > + BitIndex = LShiftU64 (BitIndex, 1);
> > + }
> > + }
> > + }
> > +
> > + if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> > + break;
> > + }
> > +
> > + Indices[Level] += 1;
> > + Address = (Level == 0) ? 0 : Addresses[Level - 1];
> > + Addresses[Level] = Address | LShiftU64 (Indices[Level], Shifts[Level]);
> > +
> > + }
> > +
> > + //
> > + // Update the maximum address of freed page which can be used for memory
> > + // promotion upon out-of-memory-space.
> > + //
> > + GetLastGuardedFreePageAddress (&Address);
> > + if (Address != 0) {
> > + mLastPromotedPage = Address;
> > + }
> > +}
> > +
> > +/**
> > + This function checks to see if the given memory map descriptor in a memory map
> > + can be merged with any guarded free pages.
> > +
> > + @param MemoryMapEntry A pointer to a descriptor in MemoryMap.
> > + @param MaxAddress Maximum address to stop the merge.
> > +
> > + @return VOID
> > +
> > +**/
> > +VOID
> > +MergeGuardPages (
> > + IN EFI_MEMORY_DESCRIPTOR *MemoryMapEntry,
> > + IN EFI_PHYSICAL_ADDRESS MaxAddress
> > + )
> > +{
> > + EFI_PHYSICAL_ADDRESS EndAddress;
> > + UINT64 Bitmap;
> > + INTN Pages;
> > +
> > + if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
> > + MemoryMapEntry->Type >= EfiMemoryMappedIO) {
> > + return;
> > + }
> > +
> > + Bitmap = 0;
> > + Pages = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
> > + Pages -= MemoryMapEntry->NumberOfPages;
> > + while (Pages > 0) {
> > + if (Bitmap == 0) {
> > + EndAddress = MemoryMapEntry->PhysicalStart +
> > + EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
> > + Bitmap = GetGuardedMemoryBits (EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
> > + }
> > +
> > + if ((Bitmap & 1) == 0) {
> > + break;
> > + }
> > +
> > + Pages--;
> > + MemoryMapEntry->NumberOfPages++;
> > + Bitmap = RShiftU64 (Bitmap, 1);
> > + }
> > +}
> > +
> > +/**
> > + Put part (at most 64 pages a time) guarded free pages back to free page pool.
> > +
> > + Freed memory guard is used to detect Use-After-Free (UAF) memory issue, which
> > + makes use of 'Used then throw away' way to detect any illegal access to freed
> > + memory. The thrown-away memory will be marked as not-present so that any access
> > + to those memory (after free) will be caught by page-fault exception.
> > +
> > + The problem is that this will consume lots of memory space. Once no memory
> > + left in pool to allocate, we have to restore part of the freed pages to their
> > + normal function. Otherwise the whole system will stop functioning.
> > +
> > + @param StartAddress Start address of promoted memory.
> > + @param EndAddress End address of promoted memory.
> > +
> > + @return TRUE Succeeded to promote memory.
> > + @return FALSE No free memory found.
> > +
> > +**/
> > +BOOLEAN
> > +PromoteGuardedFreePages (
> > + OUT EFI_PHYSICAL_ADDRESS *StartAddress,
> > + OUT EFI_PHYSICAL_ADDRESS *EndAddress
> > + )
> > +{
> > + EFI_STATUS Status;
> > + UINTN AvailablePages;
> > + UINT64 Bitmap;
> > + EFI_PHYSICAL_ADDRESS Start;
> > +
> > + if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > + return FALSE;
> > + }
> > +
> > + //
> > + // Similar to memory allocation service, always search the freed pages in
> > + // descending direction.
> > + //
> > + Start = mLastPromotedPage;
> > + AvailablePages = 0;
> > + while (AvailablePages == 0) {
> > + Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
> > + //
> > + // If the address wraps around, try the really freed pages at top.
> > + //
> > + if (Start > mLastPromotedPage) {
> > + GetLastGuardedFreePageAddress (&Start);
> > + ASSERT (Start != 0);
> > + Start -= EFI_PAGES_TO_SIZE (GUARDED_HEAP_MAP_ENTRY_BITS);
> > + }
> > +
> > + Bitmap = GetGuardedMemoryBits (Start, GUARDED_HEAP_MAP_ENTRY_BITS);
> > + while (Bitmap > 0) {
> > + if ((Bitmap & 1) != 0) {
> > + ++AvailablePages;
> > + } else if (AvailablePages == 0) {
> > + Start += EFI_PAGES_TO_SIZE (1);
> > + } else {
> > + break;
> > + }
> > +
> > + Bitmap = RShiftU64 (Bitmap, 1);
> > + }
> > + }
> > +
> > + if (AvailablePages) {
> > + DEBUG ((DEBUG_INFO, "Promoted pages: %lX (%lx)\r\n", Start, (UINT64)AvailablePages));
> > + ClearGuardedMemoryBits (Start, AvailablePages);
> > +
> > + if (gCpu != NULL) {
> > + //
> > + // Set flag to make sure allocating memory without GUARD for page table
> > + // operation; otherwise infinite loops could be caused.
> > + //
> > + mOnGuarding = TRUE;
> > + Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
> > + ASSERT_EFI_ERROR (Status);
> > + mOnGuarding = FALSE;
> > + }
> > +
> > + mLastPromotedPage = Start;
> > + *StartAddress = Start;
> > + *EndAddress = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
> > + return TRUE;
> > + }
> > +
> > + return FALSE;
> > +}
> > +
> > /**
> > Notify function used to set all Guard pages before CPU Arch Protocol installed.
> > **/
> > @@ -1212,7 +1592,20 @@ HeapGuardCpuArchProtocolNotify (
> > )
> > {
> > ASSERT (gCpu != NULL);
> > - SetAllGuardPages ();
> > +
> > + if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL) &&
> > + IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > + DEBUG ((DEBUG_ERROR, "Heap guard and freed memory guard cannot be enabled at the same time.\n"));
> > + CpuDeadLoop ();
> > + }
> > +
> > + if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
> > + SetAllGuardPages ();
> > + }
> > +
> > + if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED)) {
> > + GuardAllFreedPages ();
> > + }
> > }
> >
> > /**
> > @@ -1264,6 +1657,10 @@ DumpGuardedMemoryBitmap (
> > CHAR8 *Ruler1;
> > CHAR8 *Ruler2;
> >
> > + if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_ALL)) {
> > + return;
> > + }
> > +
> > if (mGuardedMemoryMap == 0 ||
> > mMapLevel == 0 ||
> > mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > index 8c34692439..55a91ec098 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > @@ -1,7 +1,7 @@
> > /** @file
> > Data type, macros and function prototypes of heap guard feature.
> >
> > -Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
> > +Copyright (c) 2017-2018, Intel Corporation. All rights reserved.<BR>
> > This program and the accompanying materials
> > are licensed and made available under the terms and conditions of the BSD License
> > which accompanies this distribution. The full text of the license may be found at
> > @@ -160,6 +160,9 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> > //
> > #define GUARD_HEAP_TYPE_PAGE BIT0
> > #define GUARD_HEAP_TYPE_POOL BIT1
> > +#define GUARD_HEAP_TYPE_FREED BIT4
> > +#define GUARD_HEAP_TYPE_ALL \
> > + (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_FREED)
> >
> > //
> > // Debug message level
> > @@ -392,11 +395,13 @@ AdjustPoolHeadF (
> > /**
> > Check to see if the heap guard is enabled for page and/or pool allocation.
> >
> > + @param[in] GuardType Specify the sub-type(s) of Heap Guard.
> > +
> > @return TRUE/FALSE.
> > **/
> > BOOLEAN
> > IsHeapGuardEnabled (
> > - VOID
> > + UINT8 GuardType
> > );
> >
> > /**
> > @@ -407,6 +412,62 @@ HeapGuardCpuArchProtocolNotify (
> > VOID
> > );
> >
> > +/**
> > + This function checks to see if the given memory map descriptor in a memory map
> > + can be merged with any guarded free pages.
> > +
> > + @param MemoryMapEntry A pointer to a descriptor in MemoryMap.
> > + @param MaxAddress Maximum address to stop the merge.
> > +
> > + @return VOID
> > +
> > +**/
> > +VOID
> > +MergeGuardPages (
> > + IN EFI_MEMORY_DESCRIPTOR *MemoryMapEntry,
> > + IN EFI_PHYSICAL_ADDRESS MaxAddress
> > + );
> > +
> > +/**
> > + Record freed pages as well as mark them as not-present, if enabled.
> > +
> > + @param[in] BaseAddress Base address of just freed pages.
> > + @param[in] Pages Number of freed pages.
> > +
> > + @return VOID.
> > +**/
> > +VOID
> > +EFIAPI
> > +GuardFreedPagesChecked (
> > + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> > + IN UINTN Pages
> > + );
> > +
> > +/**
> > + Put part (at most 64 pages a time) guarded free pages back to free page pool.
> > +
> > + Freed memory guard is used to detect Use-After-Free (UAF) memory issue, which
> > + makes use of 'Used then throw away' way to detect any illegal access to freed
> > + memory. The thrown-away memory will be marked as not-present so that any access
> > + to those memory (after free) will be caught by page-fault exception.
> > +
> > + The problem is that this will consume lots of memory space. Once no memory
> > + left in pool to allocate, we have to restore part of the freed pages to their
> > + normal function. Otherwise the whole system will stop functioning.
> > +
> > + @param StartAddress Start address of promoted memory.
> > + @param EndAddress End address of promoted memory.
> > +
> > + @return TRUE Succeeded to promote memory.
> > + @return FALSE No free memory found.
> > +
> > +**/
> > +BOOLEAN
> > +PromoteGuardedFreePages (
> > + OUT EFI_PHYSICAL_ADDRESS *StartAddress,
> > + OUT EFI_PHYSICAL_ADDRESS *EndAddress
> > + );
> > +
> > extern BOOLEAN mOnGuarding;
> >
> > #endif
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
> > index 3b4cc08e7c..9d9abcf565 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
> > @@ -401,9 +401,11 @@ PromoteMemoryResource (
> > VOID
> > )
> > {
> > - LIST_ENTRY *Link;
> > - EFI_GCD_MAP_ENTRY *Entry;
> > - BOOLEAN Promoted;
> > + LIST_ENTRY *Link;
> > + EFI_GCD_MAP_ENTRY *Entry;
> > + BOOLEAN Promoted;
> > + EFI_PHYSICAL_ADDRESS StartAddress;
> > + EFI_PHYSICAL_ADDRESS EndAddress;
> >
> > DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
> >
> > @@ -451,6 +453,24 @@ PromoteMemoryResource (
> >
> > CoreReleaseGcdMemoryLock ();
> >
> > + if (!Promoted) {
> > + //
> > + // If freed-memory guard is enabled, we could promote pages from
> > + // guarded free pages.
> > + //
> > + Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
> > + if (Promoted) {
> > + CoreAcquireGcdMemoryLock ();
> > + CoreAddRange (
> > + EfiConventionalMemory,
> > + StartAddress,
> > + EndAddress,
> > + EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
> > + );
> > + CoreReleaseGcdMemoryLock ();
> > + }
> > + }
> > +
> > return Promoted;
> > }
> > /**
> > @@ -896,9 +916,15 @@ CoreConvertPagesEx (
> > }
> >
> > //
> > - // Add our new range in
> > + // Add our new range in. Don't do this for freed pages if freed-memory
> > + // guard is enabled.
> > //
> > - CoreAddRange (MemType, Start, RangeEnd, Attribute);
> > + if (!IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) ||
> > + !ChangingType ||
> > + MemType != EfiConventionalMemory) {
> > + CoreAddRange (MemType, Start, RangeEnd, Attribute);
> > + }
> > +
> > if (ChangingType && (MemType == EfiConventionalMemory)) {
> > //
> > // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
> > @@ -1514,6 +1540,7 @@ CoreFreePages (
> >
> > Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
> > if (!EFI_ERROR (Status)) {
> > + GuardFreedPagesChecked (Memory, NumberOfPages);
> > CoreUpdateProfile (
> > (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
> > MemoryProfileActionFreePages,
> > @@ -1908,9 +1935,7 @@ Done:
> > *MemoryMapSize = BufferSize;
> >
> > DEBUG_CODE (
> > - if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
> > - DumpGuardedMemoryBitmap ();
> > - }
> > + DumpGuardedMemoryBitmap ();
> > );
> >
> > return Status;
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > index 1ff2061f7f..b9182ea807 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> > @@ -1,7 +1,7 @@
> > /** @file
> > UEFI Memory pool management functions.
> >
> > -Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
> > +Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
> > This program and the accompanying materials
> > are licensed and made available under the terms and conditions of the BSD License
> > which accompanies this distribution. The full text of the license may be found at
> > @@ -26,7 +26,8 @@ typedef struct {
> > } POOL_FREE;
> >
> >
> > -#define POOL_HEAD_SIGNATURE SIGNATURE_32('p','h','d','0')
> > +#define POOL_HEAD_SIGNATURE SIGNATURE_32('p','h','d','0')
> > +#define POOLPAGE_HEAD_SIGNATURE SIGNATURE_32('p','h','d','1')
> > typedef struct {
> > UINT32 Signature;
> > UINT32 Reserved;
> > @@ -367,6 +368,7 @@ CoreAllocatePoolI (
> > UINTN NoPages;
> > UINTN Granularity;
> > BOOLEAN HasPoolTail;
> > + BOOLEAN PageAsPool;
> >
> > ASSERT_LOCKED (&mPoolMemoryLock);
> >
> > @@ -386,6 +388,7 @@ CoreAllocatePoolI (
> >
> > HasPoolTail = !(NeedGuard &&
> > ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > + PageAsPool = (IsHeapGuardEnabled (GUARD_HEAP_TYPE_FREED) && !mOnGuarding);
> >
> > //
> > // Adjusting the Size to be of proper alignment so that
> > @@ -406,7 +409,7 @@ CoreAllocatePoolI (
> > // If allocation is over max size, just allocate pages for the request
> > // (slow)
> > //
> > - if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
> > + if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
> > if (!HasPoolTail) {
> > Size -= sizeof (POOL_TAIL);
> > }
> > @@ -498,7 +501,7 @@ Done:
> > //
> > // If we have a pool buffer, fill in the header & tail info
> > //
> > - Head->Signature = POOL_HEAD_SIGNATURE;
> > + Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
> > Head->Size = Size;
> > Head->Type = (EFI_MEMORY_TYPE) PoolType;
> > Buffer = Head->Data;
> > @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
> > CoreFreePoolPages (Memory, NoPages);
> > CoreReleaseMemoryLock ();
> >
> > + GuardFreedPagesChecked (Memory, NoPages);
> > ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
> > (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
> > }
> > @@ -685,15 +689,19 @@ CoreFreePoolI (
> > UINTN Granularity;
> > BOOLEAN IsGuarded;
> > BOOLEAN HasPoolTail;
> > + BOOLEAN PageAsPool;
> >
> > ASSERT(Buffer != NULL);
> > //
> > // Get the head & tail of the pool entry
> > //
> > - Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
> > + Head = BASE_CR (Buffer, POOL_HEAD, Data);
> > ASSERT(Head != NULL);
> >
> > - if (Head->Signature != POOL_HEAD_SIGNATURE) {
> > + if (Head->Signature != POOL_HEAD_SIGNATURE &&
> > + Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
> > + ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
> > + Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> > return EFI_INVALID_PARAMETER;
> > }
> >
> > @@ -701,6 +709,7 @@ CoreFreePoolI (
> > IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
> > HasPoolTail = !(IsGuarded &&
> > ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > + PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >
> > if (HasPoolTail) {
> > Tail = HEAD_TO_TAIL (Head);
> > @@ -757,7 +766,7 @@ CoreFreePoolI (
> > //
> > // If it's not on the list, it must be pool pages
> > //
> > - if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
> > + if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
> >
> > //
> > // Return the memory pages back to free memory
> > diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > index fa8f8fe91a..6298b67db1 100644
> > --- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > +++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > @@ -1250,7 +1250,7 @@ ApplyMemoryProtectionPolicy (
> > // Don't overwrite Guard pages, which should be the first and/or last page,
> > // if any.
> > //
> > - if (IsHeapGuardEnabled ()) {
> > + if (IsHeapGuardEnabled (GUARD_HEAP_TYPE_PAGE|GUARD_HEAP_TYPE_POOL)) {
> > if (IsGuardPage (Memory)) {
> > Memory += EFI_PAGE_SIZE;
> > Length -= EFI_PAGE_SIZE;
> > diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > index 05eb4f422b..04cfb2dab2 100644
> > --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> > @@ -1,7 +1,7 @@
> > /** @file
> > UEFI PropertiesTable support
> >
> > -Copyright (c) 2015 - 2017, Intel Corporation. All rights reserved.<BR>
> > +Copyright (c) 2015 - 2018, Intel Corporation. All rights reserved.<BR>
> > This program and the accompanying materials
> > are licensed and made available under the terms and conditions of the BSD License
> > which accompanies this distribution. The full text of the license may be found at
> > @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> > #include <Guid/PropertiesTable.h>
> >
> > #include "DxeMain.h"
> > +#include "HeapGuard.h"
> >
> > #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
> > ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
> > @@ -205,16 +206,13 @@ MergeMemoryMap (
> > NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
> >
> > do {
> > - MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
> > + MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
> > + MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
> > if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
> > - (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> > - (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> > - ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> > - MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> > - if (NewMemoryMapEntry != MemoryMapEntry) {
> > - NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> > - }
> > -
> > + (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> > + (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> > + ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> > + NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> > NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
> > continue;
> > } else {
> >
Thread overview: 16+ messages
2018-10-24 5:26 [PATCH v3 0/6] Introduce freed-memory guard feature Jian J Wang
2018-10-24 5:26 ` [PATCH v3 1/6] MdeModulePkg: cleanup Heap Guard pool/page type PCD documentation Jian J Wang
2018-10-25 2:55 ` Zeng, Star
2018-10-25 4:21 ` Wang, Jian J
2018-10-24 5:26 ` [PATCH v3 2/6] MdeModulePkg: introduce UEFI freed-memory guard bit in HeapGuard PCD Jian J Wang
2018-10-25 3:02 ` Zeng, Star
2018-10-25 4:23 ` Wang, Jian J
2018-10-24 5:26 ` [PATCH v3 3/6] UefiCpuPkg/CpuDxe: consider freed-memory guard in non-stop mode Jian J Wang
2018-10-24 5:26 ` [PATCH v3 4/6] UefiCpuPkg/CpuDxe: prevent recursive calling of InitializePageTablePool Jian J Wang
2018-10-24 5:26 ` [PATCH v3 5/6] MdeModulePkg/Core: prevent re-acquire GCD memory lock Jian J Wang
2018-10-25 3:22 ` Zeng, Star
2018-10-25 4:24 ` Wang, Jian J
2018-10-24 5:26 ` [PATCH v3 6/6] MdeModulePkg/Core: add freed-memory guard feature Jian J Wang
2018-10-25 3:37 ` Zeng, Star
2018-10-25 4:29 ` Wang, Jian J [this message]
2018-10-25 6:28 ` Wang, Jian J