From: "Ni, Ruiyu"
Date: Thu, 15 Mar 2018 11:46:51 +0800
To: Jian J Wang, edk2-devel@lists.01.org
Cc: Jiewen Yao, Eric Dong, Star Zeng
Subject: Re: [PATCH] MdeModulePkg/Core: allow HeapGuard even before CpuArchProtocol installed
In-Reply-To: <20180315022750.15216-1-jian.j.wang@intel.com>
References: <20180315022750.15216-1-jian.j.wang@intel.com>

On 3/15/2018 10:27 AM, Jian J Wang wrote:
>> v2 changes:
>>    Fix a logic hole in the bit operations (SetBits(), ClearBits() and
>>    GetBits()) for an address on a 64K boundary with a length of exactly
>>    64 bits.
>
> Because HeapGuard needs the CPU Arch Protocol to update page attributes,
> the feature is normally enabled only after the CPU Arch Protocol has been
> installed. Since some drivers are loaded before the CPU Arch Protocol,
> they cannot make use of the HeapGuard feature to detect potential issues.
>
> This patch fixes the above situation by updating the DXE core to skip the
> NULL check against the global gCpu in IsMemoryTypeToGuard(), and by adding
> a NULL check against gCpu in SetGuardPage() and UnsetGuardPage() so that
> they can be called but do nothing. This allows HeapGuard to record all
> guarded memory without setting the related Guard pages to not-present.
>
> Once the CPU Arch Protocol is installed, a protocol notify is called to
> complete the work of setting Guard pages to not-present.
>
> Please note that the above changes would cause a #PF in GCD code during
> the cleanup of map entries, which is initiated by the CpuDxe driver to
> sync the real MTRR and paging attributes back to GCD. During that time,
> CpuDxe doesn't allow GCD to update memory attributes, so no Guard page
> can be unset. As a result, guarded memory could not be freed during the
> memory map cleanup.
>
> The solution is to avoid allocating guarded memory for memory map entries
> in GCD code. This is done by setting the global mOnGuarding to TRUE before
> the memory allocation and setting it back to FALSE afterwards in the GCD
> function CoreAllocateGcdMapEntry().
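
A side note on the v2 bit-operation fix: the corner case is a query that
covers one whole 64-bit bitmap word (StartBit == 0, BitNumber == 64). A mask
built by shifting a 64-bit value left by 64 is undefined, and BaseLib's
LShiftU64()/RShiftU64() require a shift count below 64, so the full-word case
has to be handled separately; with only the ">=" change the computed mask
would be zero. Below is a minimal standalone sketch of that boundary case
(plain C, single-word only, illustrative names such as GetBitsFromWord -- not
the actual HeapGuard code, which can also spill into the following word):

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: extract BitNumber bits starting at StartBit from one
   64-bit bitmap word. Assumes StartBit + BitNumber <= 64 and BitNumber > 0. */
static uint64_t
GetBitsFromWord (uint64_t BitMap, unsigned StartBit, unsigned BitNumber)
{
  /* Full-word request: ((1 << 64) - 1) is undefined behavior for a 64-bit
     operand (and LShiftU64() ASSERTs that Count < 64), so the mask cannot
     be built by shifting; return the word as-is. */
  if (StartBit == 0 && BitNumber == 64) {
    return BitMap;
  }
  return (BitMap >> StartBit) & (((uint64_t)1 << BitNumber) - 1);
}

int
main (void)
{
  uint64_t Word = 0x1122334455667788ULL;

  /* Whole word: prints 1122334455667788. */
  printf ("%016llx\n", (unsigned long long) GetBitsFromWord (Word, 0, 64));

  /* Upper 32 bits: prints 0000000011223344. */
  printf ("%016llx\n", (unsigned long long) GetBitsFromWord (Word, 32, 32));
  return 0;
}
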
>
> Cc: Star Zeng
> Cc: Eric Dong
> Cc: Jiewen Yao
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang
> ---
>  MdeModulePkg/Core/Dxe/Gcd/Gcd.c               |  10 ++
>  MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 148 ++++++++++++++++++++++++--
>  MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |   8 ++
>  MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   5 +
>  MdeModulePkg/Core/PiSmmCore/HeapGuard.c       |  16 +--
>  5 files changed, 174 insertions(+), 13 deletions(-)
>
> diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> index 8fbc3d282c..77f4adb4bc 100644
> --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> @@ -16,6 +16,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>
>  #include "DxeMain.h"
>  #include "Gcd.h"
> +#include "Mem/HeapGuard.h"
>
>  #define MINIMUM_INITIAL_MEMORY_SIZE 0x10000
>
> @@ -391,12 +392,21 @@ CoreAllocateGcdMapEntry (
>    IN OUT EFI_GCD_MAP_ENTRY  **BottomEntry
>    )
>  {
> +  //
> +  // Set to mOnGuarding to TRUE before memory allocation. This will make sure
> +  // that the entry memory is not "guarded" by HeapGuard. Otherwise it might
> +  // cause problem when it's freed (if HeapGuard is enabled).
> +  //
> +  mOnGuarding = TRUE;
>    *TopEntry = AllocateZeroPool (sizeof (EFI_GCD_MAP_ENTRY));
> +  mOnGuarding = FALSE;
>    if (*TopEntry == NULL) {
>      return EFI_OUT_OF_RESOURCES;
>    }
>
> +  mOnGuarding = TRUE;
>    *BottomEntry = AllocateZeroPool (sizeof (EFI_GCD_MAP_ENTRY));
> +  mOnGuarding = FALSE;
>    if (*BottomEntry == NULL) {
>      CoreFreePool (*TopEntry);
>      return EFI_OUT_OF_RESOURCES;
> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> index ac043b5d9b..fd6aeee8da 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> @@ -70,7 +70,7 @@ SetBits (
>    StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
>    EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
>
> -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
>      Msbs  = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
>              GUARDED_HEAP_MAP_ENTRY_BITS;
>      Lsbs  = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> @@ -123,7 +123,7 @@ ClearBits (
>    StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
>    EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
>
> -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
>      Msbs  = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
>              GUARDED_HEAP_MAP_ENTRY_BITS;
>      Lsbs  = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> @@ -188,10 +188,14 @@ GetBits (
>      Lsbs = 0;
>    }
>
> -  Result = RShiftU64 ((*BitMap), StartBit) & (LShiftU64 (1, Msbs) - 1);
> -  if (Lsbs > 0) {
> -    BitMap += 1;
> -    Result |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> +  if (StartBit == 0 && BitNumber == GUARDED_HEAP_MAP_ENTRY_BITS) {
> +    Result = *BitMap;
> +  } else {
> +    Result = RShiftU64((*BitMap), StartBit) & (LShiftU64(1, Msbs) - 1);
> +    if (Lsbs > 0) {
> +      BitMap += 1;
> +      Result |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> +    }
>    }
>
>    return Result;
> @@ -576,6 +580,10 @@ SetGuardPage (
>    IN  EFI_PHYSICAL_ADDRESS      BaseAddress
>    )
>  {
> +  if (gCpu == NULL) {
> +    return;
> +  }
> +
>    //
>    // Set flag to make sure allocating memory without GUARD for page table
>    // operation; otherwise infinite loops could be caused.
> @@ -606,6 +614,10 @@ UnsetGuardPage (
>  {
>    UINT64          Attributes;
>
> +  if (gCpu == NULL) {
> +    return;
> +  }
> +
>    //
>    // Once the Guard page is unset, it will be freed back to memory pool. NX
>    // memory protection must be restored for this page if NX is enabled for free
> @@ -652,7 +664,7 @@ IsMemoryTypeToGuard (
>    UINT64          ConfigBit;
>    BOOLEAN         InSmm;
>
> -  if (gCpu == NULL || AllocateType == AllocateAddress) {
> +  if (AllocateType == AllocateAddress) {
>      return FALSE;
>    }
>
> @@ -1164,6 +1176,128 @@ CoreConvertPagesWithGuard (
>    return CoreConvertPages (Start, NumberOfPages, NewType);
>  }
>
> +/**
> +  Set all Guard pages which cannot be set before CPU Arch Protocol installed.
> +**/
> +VOID
> +SetAllGuardPages (
> +  VOID
> +  )
> +{
> +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> +  UINT64    TableEntry;
> +  UINT64    Address;
> +  UINT64    GuardPage;
> +  INTN      Level;
> +  UINTN     Index;
> +  BOOLEAN   OnGuarding;
> +
> +  if (mGuardedMemoryMap == 0 ||
> +      mMapLevel == 0 ||
> +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> +    return;
> +  }
> +
> +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> +
> +  SetMem (Tables, sizeof(Tables), 0);
> +  SetMem (Addresses, sizeof(Addresses), 0);
> +  SetMem (Indices, sizeof(Indices), 0);
> +
> +  Level         = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> +  Tables[Level] = mGuardedMemoryMap;
> +  Address       = 0;
> +  OnGuarding    = FALSE;
> +
> +  DEBUG_CODE (
> +    DumpGuardedMemoryBitmap ();
> +  );
> +
> +  while (TRUE) {
> +    if (Indices[Level] > Entries[Level]) {
> +      Tables[Level] = 0;
> +      Level        -= 1;
> +    } else {
> +
> +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> +      Address     = Addresses[Level];
> +
> +      if (TableEntry == 0) {
> +
> +        OnGuarding = FALSE;
> +
> +      } else if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> +
> +        Level            += 1;
> +        Tables[Level]     = TableEntry;
> +        Addresses[Level]  = Address;
> +        Indices[Level]    = 0;
> +
> +        continue;
> +
> +      } else {
> +
> +        Index = 0;
> +        while (Index < GUARDED_HEAP_MAP_ENTRY_BITS) {
> +          if ((TableEntry & 1) == 1) {
> +            if (OnGuarding) {
> +              GuardPage = 0;
> +            } else {
> +              GuardPage = Address - EFI_PAGE_SIZE;
> +            }
> +            OnGuarding = TRUE;
> +          } else {
> +            if (OnGuarding) {
> +              GuardPage = Address;
> +            } else {
> +              GuardPage = 0;
> +            }
> +            OnGuarding = FALSE;
> +          }
> +
> +          if (GuardPage != 0) {
> +            SetGuardPage (GuardPage);
> +          }
> +
> +          if (TableEntry == 0) {
> +            break;
> +          }
> +
> +          TableEntry = RShiftU64 (TableEntry, 1);
> +          Address   += EFI_PAGE_SIZE;
> +          Index     += 1;
> +        }
> +      }
> +    }
> +
> +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> +      break;
> +    }
> +
> +    Indices[Level] += 1;
> +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> +    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
> +
> +  }
> +}
> +
> +/**
> +  Notify function used to set all Guard pages before CPU Arch Protocol installed.
> +**/
> +VOID
> +HeapGuardCpuArchProtocolNotify (
> +  VOID
> +  )
> +{
> +  ASSERT (gCpu != NULL);
> +  SetAllGuardPages ();
> +}
> +
>  /**
>    Helper function to convert a UINT64 value in binary to a string.
>
> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> index 7208ab1437..8c34692439 100644
> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> @@ -399,6 +399,14 @@ IsHeapGuardEnabled (
>    VOID
>    );
>
> +/**
> +  Notify function used to set all Guard pages after CPU Arch Protocol installed.
> +**/
> +VOID
> +HeapGuardCpuArchProtocolNotify (
> +  VOID
> +  );
> +
>  extern BOOLEAN mOnGuarding;
>
>  #endif
> diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> index 407aece807..2f7e490af1 100644
> --- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> +++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> @@ -1001,6 +1001,11 @@ MemoryProtectionCpuArchProtocolNotify (
>      InitializeDxeNxMemoryProtectionPolicy ();
>    }
>
> +  //
> +  // Call notify function meant for Heap Guard.
> +  //
> +  HeapGuardCpuArchProtocolNotify ();
> +
>    if (mImageProtectionPolicy == 0) {
>      return;
>    }
> diff --git a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> index 923af93de2..f9657f9baa 100644
> --- a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> +++ b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> @@ -73,7 +73,7 @@ SetBits (
>    StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
>    EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
>
> -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
>      Msbs  = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
>              GUARDED_HEAP_MAP_ENTRY_BITS;
>      Lsbs  = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> @@ -126,7 +126,7 @@ ClearBits (
>    StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
>    EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
>
> -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
>      Msbs  = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
>              GUARDED_HEAP_MAP_ENTRY_BITS;
>      Lsbs  = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> @@ -191,10 +191,14 @@ GetBits (
>      Lsbs = 0;
>    }
>
> -  Result = RShiftU64 ((*BitMap), StartBit) & (LShiftU64 (1, Msbs) - 1);
> -  if (Lsbs > 0) {
> -    BitMap += 1;
> -    Result |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> +  if (StartBit == 0 && BitNumber == GUARDED_HEAP_MAP_ENTRY_BITS) {
> +    Result = *BitMap;
> +  } else {
> +    Result = RShiftU64((*BitMap), StartBit) & (LShiftU64(1, Msbs) - 1);
> +    if (Lsbs > 0) {
> +      BitMap += 1;
> +      Result |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> +    }
>    }
>
>    return Result;
>

Jian,
I think this single patch should be split into at least three patches: two
of them fix the ">=" issue in DxeCore and PiSmmCore, and one of them fixes
the HeapGuard gap.

-- 
Thanks,
Ray