public inbox for devel@edk2.groups.io
From: "Zeng, Star" <star.zeng@intel.com>
To: "Wang, Jian J" <jian.j.wang@intel.com>,
	"edk2-devel@lists.01.org" <edk2-devel@lists.01.org>
Cc: "Kinney, Michael D" <michael.d.kinney@intel.com>,
	"Ni, Ruiyu" <ruiyu.ni@intel.com>,
	"Yao, Jiewen" <jiewen.yao@intel.com>,
	Laszlo Ersek <lersek@redhat.com>,
	star.zeng@intel.com
Subject: Re: [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection
Date: Tue, 23 Oct 2018 11:14:57 +0800
Message-ID: <28767686-eeb4-6494-2b2f-3aa12099f1e8@intel.com>
In-Reply-To: <D827630B58408649ACB04F44C510003624E86337@SHSMSX103.ccr.corp.intel.com>

On 2018/10/23 9:24, Wang, Jian J wrote:
> Star,
> 
> Regards,
> Jian
> 
> 
>> -----Original Message-----
>> From: Zeng, Star
>> Sent: Monday, October 22, 2018 4:24 PM
>> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
>> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
>> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
>> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
>> Subject: Re: [edk2] [PATCH 3/3] MdeModulePkg/Core: add use-after-free
>> memory detection
>>
>> On 2018/10/22 15:12, Wang, Jian J wrote:
>>> Star,
>>>
>>> Thanks for the comment. Please see my opinions below.
>>>
>>> Regards,
>>> Jian
>>>
>>>
>>>> -----Original Message-----
>>>> From: Zeng, Star
>>>> Sent: Monday, October 22, 2018 10:53 AM
>>>> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
>>>> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Ni, Ruiyu
>>>> <ruiyu.ni@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Laszlo Ersek
>>>> <lersek@redhat.com>; Zeng, Star <star.zeng@intel.com>
>>>> Subject: Re: [edk2] [PATCH 3/3] MdeModulePkg/Core: add use-after-free
>>>> memory detection
>>>>
>>>> On 2018/10/19 9:50, Jian J Wang wrote:
>>>>> UAF (Use-After-Free) memory detection is a new feature introduced to
>>>>> detect illegal access to memory which has been freed. The principle
>>>>> behind it is similar to the heap guard feature: the core turns all pool
>>>>> memory allocations into page allocations and marks them as not-present
>>>>> once they are freed.
>>>>>
>>>>> This also implies that, once a page is allocated and freed, it cannot
>>>>> be re-allocated. This brings another issue: there is a risk that memory
>>>>> space will be used up. To address it, the memory service adds logic to
>>>>> put part (at most 64 pages at a time) of the freed pages back into the
>>>>> page pool, so that the memory service still has memory to allocate after
>>>>> all memory space has been allocated once. This is called memory
>>>>> promotion. The promoted pages are always the oldest pages that have
>>>>> been freed.
>>>>>
>>>>> To use this feature, one can simply set the following PCD to 1:
>>>>>
>>>>> gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
>>>>>
>>>>> Please note this feature cannot be used together with the heap guard
>>>>> feature controlled by PCD gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask.
>>>>>
>>>>> Cc: Star Zeng <star.zeng@intel.com>
>>>>> Cc: Michael D Kinney <michael.d.kinney@intel.com>
>>>>> Cc: Jiewen Yao <jiewen.yao@intel.com>
>>>>> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
>>>>> Cc: Laszlo Ersek <lersek@redhat.com>
>>>>> Contributed-under: TianoCore Contribution Agreement 1.1
>>>>> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
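
In short, the free path described in the commit message boils down to the
following minimal sketch. It mirrors GuardFreedPages() in the patch below;
the wrapper name here is illustrative and error handling is omitted:

  //
  // Record a freed range in the guard bitmap, then unmap it so that any
  // later access to the freed memory page-faults immediately.
  //
  VOID
  SketchGuardFreedRange (
    IN EFI_PHYSICAL_ADDRESS  BaseAddress,
    IN UINTN                 Pages
    )
  {
    MarkFreedPages (BaseAddress, Pages);      // set bits in the guard bitmap
    if (gCpu != NULL) {
      gCpu->SetMemoryAttributes (
              gCpu,
              BaseAddress,
              EFI_PAGES_TO_SIZE (Pages),
              EFI_MEMORY_RP                   // mark pages not-present
              );
    }
  }

The freed range is not returned to the free list; it is only "promoted"
back (oldest first, at most 64 pages at a time) once allocation would
otherwise fail.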
>>>>
>>>> I did not review the whole patch thoroughly. But I suggest putting
>>>> the code for UAF in a separate file if that is feasible. I am aware that
>>>> UAF will share much code with HeapGuard, so could we have Uaf.c,
>>>> HeapGuard.c and GuardPage.c (these file names are just examples)?
>>>>
>>>
>>> I think we can take the UAF as a special kind of heap guard: guarding the
>>> freed heap memory instead of guarding the allocated heap memory. So
>>> UAF is a complement of Heap Guard or part of Heap Guard. From this
>>> perspective, it's not necessary to put it in a separate file.
>>
>> Very good point. From this perspective, I agree.
>> Then I would like to suggest not introducing the new PCD
>> PcdUseAfterFreeDetectionPropertyMask, but using one bit of
>> PcdHeapGuardPropertyMask for it, since we consider it part of Heap Guard.
>> The benefits are:
>> 1. No confusion between Heap Guard and Use After Free.
>> 2. No need to introduce a new PCD.
>> 3. Can reuse BIT6 - Enable non-stop mode.
>> 4. No need to update DxeIplPeim to enable IA32 PAE paging.
>>
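
A quick sketch of what this could look like, assuming (hypothetically)
that BIT4 of PcdHeapGuardPropertyMask were assigned to the freed-memory
guard; the actual bit choice would be up to the maintainers:

  //
  // Hypothetical: BIT4 of PcdHeapGuardPropertyMask enables use-after-free
  // detection, replacing BIT0 of the separate UAF PCD proposed in the patch.
  //
  BOOLEAN
  IsUafEnabled (
    VOID
    )
  {
    return ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT4) != 0);
  }

With that, BIT6 (non-stop mode) and the existing DxeIplPeim IA32 PAE
paging logic keyed off PcdHeapGuardPropertyMask would apply to UAF
detection without further changes.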
> 
> Good idea. I'll try it. Thanks.
> 
>>>
>>>>> ---
>>>>>     MdeModulePkg/Core/Dxe/DxeMain.h              |   1 +
>>>>>     MdeModulePkg/Core/Dxe/DxeMain.inf            |   1 +
>>>>>     MdeModulePkg/Core/Dxe/Gcd/Gcd.c              |   8 +
>>>>>     MdeModulePkg/Core/Dxe/Mem/HeapGuard.c        | 393 ++++++++++++++++++++++++++-
>>>>>     MdeModulePkg/Core/Dxe/Mem/HeapGuard.h        |  66 +++++
>>>>>     MdeModulePkg/Core/Dxe/Mem/Page.c             |  39 ++-
>>>>>     MdeModulePkg/Core/Dxe/Mem/Pool.c             |  21 +-
>>>>>     MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c |  16 +-
>>>>>     8 files changed, 519 insertions(+), 26 deletions(-)
>>>>>
>>>>> diff --git a/MdeModulePkg/Core/Dxe/DxeMain.h b/MdeModulePkg/Core/Dxe/DxeMain.h
>>>>> index 2dec9da5e3..ae75cc5b25 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/DxeMain.h
>>>>> +++ b/MdeModulePkg/Core/Dxe/DxeMain.h
>>>>> @@ -92,6 +92,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>>>>>     #include <Library/DxeServicesLib.h>
>>>>>     #include <Library/DebugAgentLib.h>
>>>>>     #include <Library/CpuExceptionHandlerLib.h>
>>>>> +#include <Library/DebugPrintErrorLevelLib.h>
>>>>>
>>>>>
>>>>>     //
>>>>> diff --git a/MdeModulePkg/Core/Dxe/DxeMain.inf b/MdeModulePkg/Core/Dxe/DxeMain.inf
>>>>> index 10375443c0..d91258c087 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/DxeMain.inf
>>>>> +++ b/MdeModulePkg/Core/Dxe/DxeMain.inf
>>>>> @@ -198,6 +198,7 @@
>>>>>       gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType                       ## CONSUMES
>>>>>       gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask                   ## CONSUMES
>>>>>       gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard                           ## CONSUMES
>>>>> +     gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask       ## CONSUMES
>>>>>
>>>>>     # [Hob]
>>>>>     # RESOURCE_DESCRIPTOR   ## CONSUMES
>>>>> diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
>>>>> index d9c65a8045..e43ec74010 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
>>>>> +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
>>>>> @@ -160,6 +160,10 @@ CoreDumpGcdMemorySpaceMap (
>>>>>         EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *MemorySpaceMap;
>>>>>         UINTN                            Index;
>>>>>
>>>>> +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
>>>>> +      return;
>>>>> +    }
>>>>
>>>> I am aware the purpose of this change is to optimize some code when
>>>> DEBUG_GCD is not set.
>>>> But I do not suggest we introduce a new dependency on
>>>> DebugPrintErrorLevelLib; I think this check should be hidden in the
>>>> instance of DebugLib. Maybe a new macro DEBUG_CODE_ERROR_LEVEL (this
>>>> macro name is just an example) can be introduced if we really want to
>>>> do that.
>>>>
>>>
>>> There are two purposes here: one is optimization, but the other is to fix a real
>>> issue here: GCD memory lock reentrance (see below for details)
>>>
>>>      CoreDumpGcdMemorySpaceMap()
>>> => CoreGetMemorySpaceMap()
>>> => CoreAcquireGcdMemoryLock () *
>>>      AllocatePool()
>>> => InternalAllocatePool()
>>> => CoreAllocatePool()
>>> => CoreAllocatePoolI()
>>> => CoreAllocatePoolPagesI()
>>> => CoreAllocatePoolPages()
>>> => FindFreePages()
>>> => PromoteMemoryResource()
>>> => CoreAcquireGcdMemoryLock() **
>>>
>>> If GetDebugPrintErrorLevel() is not recommended, maybe we need to fix the
>>> above issue in a different way. For example, change CoreGetMemorySpaceMap()
>>> to avoid the lock conflict, if possible. In addition, we could set
>>> PcdFixedDebugPrintErrorLevel to control the DEBUG_GCD print.
>>
>> Got it, thanks.
>> I still do not suggest using GetDebugPrintErrorLevel with a new
>> DebugPrintErrorLevelLib dependency.
>> Seemingly, the problem could not even be resolved thoroughly, because it
>> may still happen when DEBUG_GCD is set, right?
>>
>> A possible solution is adding CoreAcquireGcdMemoryLockOrFail, updating
>> PromoteMemoryResource to use CoreAcquireGcdMemoryLockOrFail, and
>> replacing the ASSERT in CoreDumpGcdMemorySpaceMap with an error check.
>> Maybe there is another good idea.
>>
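
A minimal sketch of that try-acquire idea, assuming the DXE core's
existing CoreAcquireLockOrFail() helper (which returns EFI_ACCESS_DENIED
when the lock is already owned) and the mGcdMemorySpaceLock variable in
Gcd.c:

  //
  // Try to take the GCD memory lock, but fail instead of asserting when
  // it is already owned, i.e. in the reentrant call chain shown above.
  //
  EFI_STATUS
  CoreAcquireGcdMemoryLockOrFail (
    VOID
    )
  {
    return CoreAcquireLockOrFail (&mGcdMemorySpaceLock);
  }

PromoteMemoryResource() would then skip promotion and return on
EFI_ACCESS_DENIED rather than recursing into a lock it already holds.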
> 
> The actual issue is that, in the code of CoreGetMemorySpaceMap(), AllocatePool()
> is inside the GCD lock scope. Maybe we could move it out of the lock. Do you
> know if AllocatePool will change the GCD memory map? I can't see it from the
> code. But maybe I am missing something.
> 
>    CoreAcquireGcdMemoryLock ();
>    ...
>    ...
>    *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
>    ...
>    ...
> Done:
>    CoreReleaseGcdMemoryLock ();
> 
> =======>
> 
>    CoreAcquireGcdMemoryLock ();
>    ...
>    CoreReleaseGcdMemoryLock ();
>    ...
>    *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
>    ...
>    CoreAcquireGcdMemoryLock ();
>    ...
> Done:
>    CoreReleaseGcdMemoryLock ();

Good idea. But we may need to recalculate NumberOfDescriptors inside the
lock, and check whether it has been updated and needs a pool re-allocation
or not.
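
A sketch of how that re-check could look in CoreGetMemorySpaceMap(), with
the allocation moved outside the lock. CountGcdMemoryDescriptors() stands
in for the descriptor-counting loop the function already contains; the
retry structure is illustrative, not a finished implementation:

  DescriptorCount = 0;
  *MemorySpaceMap = NULL;
  while (TRUE) {
    //
    // Take the count under the lock, then release the lock before calling
    // AllocatePool(), so that the allocation cannot re-enter the GCD
    // memory lock via PromoteMemoryResource().
    //
    CoreAcquireGcdMemoryLock ();
    DescriptorCount = CountGcdMemoryDescriptors ();
    CoreReleaseGcdMemoryLock ();

    if (*MemorySpaceMap != NULL) {
      FreePool (*MemorySpaceMap);
    }
    *MemorySpaceMap = AllocatePool (
                        DescriptorCount * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR)
                        );
    if (*MemorySpaceMap == NULL) {
      return EFI_OUT_OF_RESOURCES;
    }

    //
    // Re-take the lock. If the descriptor count is unchanged, the buffer
    // is still the right size and can be filled under the lock; otherwise
    // release the lock and try again with the new count.
    //
    CoreAcquireGcdMemoryLock ();
    if (DescriptorCount == CountGcdMemoryDescriptors ()) {
      break;
    }
    CoreReleaseGcdMemoryLock ();
  }
  //
  // ... fill *MemorySpaceMap under the lock, then release it ...
  //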

Thanks,
Star

> 
>>
>> Thanks,
>> Star
>>
>>>
>>>>
>>>> Thanks,
>>>> Star
>>>>
>>>>> +
>>>>>         Status = CoreGetMemorySpaceMap (&NumberOfDescriptors, &MemorySpaceMap);
>>>>>         ASSERT (Status == EFI_SUCCESS && MemorySpaceMap != NULL);
>>>>>
>>>>> @@ -202,6 +206,10 @@ CoreDumpGcdIoSpaceMap (
>>>>>         EFI_GCD_IO_SPACE_DESCRIPTOR  *IoSpaceMap;
>>>>>         UINTN                        Index;
>>>>>
>>>>> +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
>>>>> +      return;
>>>>> +    }
>>>>> +
>>>>>         Status = CoreGetIoSpaceMap (&NumberOfDescriptors, &IoSpaceMap);
>>>>>         ASSERT (Status == EFI_SUCCESS && IoSpaceMap != NULL);
>>>>>
>>>>> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
>>>>> index 663f969c0d..b120c04f8f 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
>>>>> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
>>>>> @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
>>>>>     GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
>>>>>                                         = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
>>>>>
>>>>> +//
>>>>> +// Used for Use-After-Free memory detection to promote freed but not used pages.
>>>>> +//
>>>>> +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
>>>>> +
>>>>>     /**
>>>>>       Set corresponding bits in bitmap table to 1 according to the address.
>>>>>
>>>>> @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
>>>>>
>>>>>       @return An integer containing the guarded memory bitmap.
>>>>>     **/
>>>>> -UINTN
>>>>> +UINT64
>>>>>     GetGuardedMemoryBits (
>>>>>       IN EFI_PHYSICAL_ADDRESS    Address,
>>>>>       IN UINTN                   NumberOfPages
>>>>> @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
>>>>>     {
>>>>>       UINT64            *BitMap;
>>>>>       UINTN             Bits;
>>>>> -  UINTN             Result;
>>>>> +  UINT64            Result;
>>>>>       UINTN             Shift;
>>>>>       UINTN             BitsToUnitEnd;
>>>>>
>>>>> @@ -1203,6 +1208,373 @@ SetAllGuardPages (
>>>>>       }
>>>>>     }
>>>>>
>>>>> +/**
>>>>> +  Check to see if the Use-After-Free (UAF) feature is enabled or not.
>>>>> +
>>>>> +  @return TRUE/FALSE.
>>>>> +**/
>>>>> +BOOLEAN
>>>>> +IsUafEnabled (
>>>>> +  VOID
>>>>> +  )
>>>>> +{
>>>>> +  ASSERT (!IsHeapGuardEnabled());
>>>>> +  return ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) != 0);
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> +  Find the address of top-most guarded free page.
>>>>> +
>>>>> +  @param[out]  Address    Start address of top-most guarded free page.
>>>>> +
>>>>> +  @return VOID.
>>>>> +**/
>>>>> +VOID
>>>>> +GetLastGuardedFreePageAddress (
>>>>> +  OUT EFI_PHYSICAL_ADDRESS      *Address
>>>>> +  )
>>>>> +{
>>>>> +  EFI_PHYSICAL_ADDRESS    AddressGranularity;
>>>>> +  EFI_PHYSICAL_ADDRESS    BaseAddress;
>>>>> +  UINTN                   Level;
>>>>> +  UINT64                  Map;
>>>>> +  INTN                    Index;
>>>>> +
>>>>> +  ASSERT (mMapLevel >= 1);
>>>>> +
>>>>> +  BaseAddress = 0;
>>>>> +  Map = mGuardedMemoryMap;
>>>>> +  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
>>>>> +       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
>>>>> +       ++Level) {
>>>>> +    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
>>>>> +
>>>>> +    //
>>>>> +    // Find the non-NULL entry at largest index.
>>>>> +    //
>>>>> +    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
>>>>> +      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
>>>>> +        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
>>>>> +        Map = ((UINT64 *)(UINTN)Map)[Index];
>>>>> +        break;
>>>>> +      }
>>>>> +    }
>>>>> +  }
>>>>> +
>>>>> +  //
>>>>> +  // Find the non-zero MSB then get the page address.
>>>>> +  //
>>>>> +  while (Map != 0) {
>>>>> +    Map = RShiftU64 (Map, 1);
>>>>> +    BaseAddress += EFI_PAGES_TO_SIZE (1);
>>>>> +  }
>>>>> +
>>>>> +  *Address = BaseAddress;
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> +  Record freed pages.
>>>>> +
>>>>> +  @param[in]  BaseAddress   Base address of just freed pages.
>>>>> +  @param[in]  Pages         Number of freed pages.
>>>>> +
>>>>> +  @return VOID.
>>>>> +**/
>>>>> +VOID
>>>>> +MarkFreedPages (
>>>>> +  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
>>>>> +  IN UINTN                    Pages
>>>>> +  )
>>>>> +{
>>>>> +  SetGuardedMemoryBits (BaseAddress, Pages);
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> +  Record freed pages as well as mark them as not-present, if possible.
>>>>> +
>>>>> +  @param[in]  BaseAddress   Base address of just freed pages.
>>>>> +  @param[in]  Pages         Number of freed pages.
>>>>> +
>>>>> +  @return VOID.
>>>>> +**/
>>>>> +VOID
>>>>> +EFIAPI
>>>>> +GuardFreedPages (
>>>>> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
>>>>> +  IN  UINTN                   Pages
>>>>> +  )
>>>>> +{
>>>>> +  EFI_STATUS      Status;
>>>>> +
>>>>> +  //
>>>>> +  // Do nothing if
>>>>> +  // - Use-After-Free detection is not enabled
>>>>> +  // - Freed memory is legacy memory lower than 1MB
>>>>> +  //
>>>>> +  if (!IsUafEnabled () || BaseAddress < BASE_1MB) {
>>>>> +    return;
>>>>> +  }
>>>>> +
>>>>> +  MarkFreedPages (BaseAddress, Pages);
>>>>> +  if (gCpu != NULL) {
>>>>> +    //
>>>>> +    // Set flag to make sure allocating memory without GUARD for page table
>>>>> +    // operation; otherwise infinite loops could be caused.
>>>>> +    //
>>>>> +    mOnGuarding = TRUE;
>>>>> +    //
>>>>> +    // Note: This might overwrite other attributes needed by other features,
>>>>> +    // such as NX memory protection.
>>>>> +    //
>>>>> +    Status = gCpu->SetMemoryAttributes (
>>>>> +                     gCpu,
>>>>> +                     BaseAddress,
>>>>> +                     EFI_PAGES_TO_SIZE(Pages),
>>>>> +                     EFI_MEMORY_RP
>>>>> +                     );
>>>>> +    //
>>>>> +    // Normally we should check the returned Status. But there might be memory
>>>>> +    // alloc/free involved in SetMemoryAttributes(), which would fail this
>>>>> +    // call. It's a rare case, so it's OK to let a few tiny holes be not-guarded.
>>>>> +    //
>>>>> +    if (EFI_ERROR (Status)) {
>>>>> +      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%d)\n", BaseAddress, Pages));
>>>>> +    }
>>>>> +    mOnGuarding = FALSE;
>>>>> +  }
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> +  Mark all pages freed before CPU Arch Protocol as not-present.
>>>>> +
>>>>> +**/
>>>>> +VOID
>>>>> +GuardAllFreedPages (
>>>>> +  VOID
>>>>> +  )
>>>>> +{
>>>>> +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
>>>>> +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
>>>>> +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
>>>>> +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
>>>>> +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
>>>>> +  UINT64    TableEntry;
>>>>> +  UINT64    Address;
>>>>> +  UINT64    GuardPage;
>>>>> +  INTN      Level;
>>>>> +  UINTN     BitIndex;
>>>>> +  UINTN     GuardPageNumber;
>>>>> +
>>>>> +  if (mGuardedMemoryMap == 0 ||
>>>>> +      mMapLevel == 0 ||
>>>>> +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
>>>>> +    return;
>>>>> +  }
>>>>> +
>>>>> +  CopyMem (Entries, mLevelMask, sizeof (Entries));
>>>>> +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
>>>>> +
>>>>> +  SetMem (Tables, sizeof(Tables), 0);
>>>>> +  SetMem (Addresses, sizeof(Addresses), 0);
>>>>> +  SetMem (Indices, sizeof(Indices), 0);
>>>>> +
>>>>> +  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
>>>>> +  Tables[Level]   = mGuardedMemoryMap;
>>>>> +  Address         = 0;
>>>>> +  GuardPage       = (UINT64)-1;
>>>>> +  GuardPageNumber = 0;
>>>>> +
>>>>> +  while (TRUE) {
>>>>> +    if (Indices[Level] > Entries[Level]) {
>>>>> +      Tables[Level] = 0;
>>>>> +      Level        -= 1;
>>>>> +    } else {
>>>>> +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
>>>>> +      Address     = Addresses[Level];
>>>>> +
>>>>> +      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
>>>>> +        Level            += 1;
>>>>> +        Tables[Level]     = TableEntry;
>>>>> +        Addresses[Level]  = Address;
>>>>> +        Indices[Level]    = 0;
>>>>> +
>>>>> +        continue;
>>>>> +      } else {
>>>>> +        BitIndex = 1;
>>>>> +        while (BitIndex != 0) {
>>>>> +          if ((TableEntry & BitIndex) != 0) {
>>>>> +            if (GuardPage == (UINT64)-1) {
>>>>> +              GuardPage = Address;
>>>>> +            }
>>>>> +            ++GuardPageNumber;
>>>>> +          } else if (GuardPageNumber > 0) {
>>>>> +            GuardFreedPages (GuardPage, GuardPageNumber);
>>>>> +            GuardPageNumber = 0;
>>>>> +            GuardPage       = (UINT64)-1;
>>>>> +          }
>>>>> +
>>>>> +          if (TableEntry == 0) {
>>>>> +            break;
>>>>> +          }
>>>>> +
>>>>> +          Address += EFI_PAGES_TO_SIZE (1);
>>>>> +          BitIndex = LShiftU64(BitIndex, 1);
>>>>> +        }
>>>>> +      }
>>>>> +    }
>>>>> +
>>>>> +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
>>>>> +      break;
>>>>> +    }
>>>>> +
>>>>> +    Indices[Level] += 1;
>>>>> +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
>>>>> +    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
>>>>> +
>>>>> +  }
>>>>> +
>>>>> +  //
>>>>> +  // Update the maximum address of freed page which can be used for
>>>> memory
>>>>> +  // promotion upon out-of-memory-space.
>>>>> +  //
>>>>> +  GetLastGuardedFreePageAddress (&Address);
>>>>> +  if (Address != 0) {
>>>>> +    mLastPromotedPage = Address;
>>>>> +  }
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> +  This function checks to see if the given memory map descriptor in a memory map
>>>>> +  can be merged with any guarded free pages.
>>>>> +
>>>>> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
>>>>> +  @param  MaxAddress        Maximum address to stop the merge.
>>>>> +
>>>>> +  @return VOID
>>>>> +
>>>>> +**/
>>>>> +VOID
>>>>> +MergeGuardPages (
>>>>> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
>>>>> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
>>>>> +  )
>>>>> +{
>>>>> +  EFI_PHYSICAL_ADDRESS        EndAddress;
>>>>> +  UINT64                      Bitmap;
>>>>> +  INTN                        Pages;
>>>>> +
>>>>> +  if (!IsUafEnabled () ||
>>>>> +      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
>>>>> +    return;
>>>>> +  }
>>>>> +
>>>>> +  Bitmap = 0;
>>>>> +  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
>>>>> +  Pages -= MemoryMapEntry->NumberOfPages;
>>>>> +  while (Pages > 0) {
>>>>> +    if (Bitmap == 0) {
>>>>> +      EndAddress = MemoryMapEntry->PhysicalStart +
>>>>> +                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
>>>>> +      Bitmap = GetGuardedMemoryBits(EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
>>>>> +    }
>>>>> +
>>>>> +    if ((Bitmap & 1) == 0) {
>>>>> +      break;
>>>>> +    }
>>>>> +
>>>>> +    Pages--;
>>>>> +    MemoryMapEntry->NumberOfPages++;
>>>>> +    Bitmap = RShiftU64 (Bitmap, 1);
>>>>> +  }
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> +  Put part (at most 64 pages a time) of guarded free pages back to the free page pool.
>>>>> +
>>>>> +  Use-After-Free detection uses a 'use then throw away' approach to detect
>>>>> +  any illegal access to freed memory. The thrown-away memory will be marked as
>>>>> +  not-present so that any access to that memory (after free) will trigger a
>>>>> +  page-fault.
>>>>> +
>>>>> +  The problem is that this will consume lots of memory space. Once no memory
>>>>> +  is left in the pool to allocate, we have to restore part of the freed pages to
>>>>> +  their normal function. Otherwise the whole system will stop functioning.
>>>>> +
>>>>> +  @param  StartAddress    Start address of promoted memory.
>>>>> +  @param  EndAddress      End address of promoted memory.
>>>>> +
>>>>> +  @return TRUE    Succeeded to promote memory.
>>>>> +  @return FALSE   No free memory found.
>>>>> +
>>>>> +**/
>>>>> +BOOLEAN
>>>>> +PromoteGuardedFreePages (
>>>>> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
>>>>> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
>>>>> +  )
>>>>> +{
>>>>> +  EFI_STATUS              Status;
>>>>> +  UINTN                   AvailablePages;
>>>>> +  UINT64                  Bitmap;
>>>>> +  EFI_PHYSICAL_ADDRESS    Start;
>>>>> +
>>>>> +  if (!IsUafEnabled ()) {
>>>>> +    return FALSE;
>>>>> +  }
>>>>> +
>>>>> +  //
>>>>> +  // Similar to memory allocation service, always search the freed pages in
>>>>> +  // descending direction.
>>>>> +  //
>>>>> +  Start           = mLastPromotedPage;
>>>>> +  AvailablePages  = 0;
>>>>> +  while (AvailablePages == 0) {
>>>>> +    Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
>>>>> +    //
>>>>> +    // If the address wraps around, try the really freed pages at top.
>>>>> +    //
>>>>> +    if (Start > mLastPromotedPage) {
>>>>> +      GetLastGuardedFreePageAddress (&Start);
>>>>> +      ASSERT (Start != 0);
>>>>> +      Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
>>>>> +    }
>>>>> +
>>>>> +    Bitmap = GetGuardedMemoryBits(Start, GUARDED_HEAP_MAP_ENTRY_BITS);
>>>>> +    while (Bitmap > 0) {
>>>>> +      if ((Bitmap & 1) != 0) {
>>>>> +        ++AvailablePages;
>>>>> +      } else if (AvailablePages == 0) {
>>>>> +        Start += EFI_PAGES_TO_SIZE (1);
>>>>> +      } else {
>>>>> +        break;
>>>>> +      }
>>>>> +
>>>>> +      Bitmap = RShiftU64(Bitmap, 1);
>>>>> +    }
>>>>> +  }
>>>>> +
>>>>> +  if (AvailablePages) {
>>>>> +    DEBUG((DEBUG_INFO, "Promoted pages: %lX (%X)\r\n", Start, AvailablePages));
>>>>> +    if (gCpu != NULL) {
>>>>> +      //
>>>>> +      // Set flag to make sure allocating memory without GUARD for page table
>>>>> +      // operation; otherwise infinite loops could be caused.
>>>>> +      //
>>>>> +      mOnGuarding = TRUE;
>>>>> +      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
>>>>> +      ASSERT_EFI_ERROR (Status);
>>>>> +      mOnGuarding = FALSE;
>>>>> +    }
>>>>> +
>>>>> +    mLastPromotedPage = Start;
>>>>> +    *StartAddress     = Start;
>>>>> +    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
>>>>> +    return TRUE;
>>>>> +  }
>>>>> +
>>>>> +  return FALSE;
>>>>> +}
>>>>> +
>>>>>     /**
>>>>>       Notify function used to set all Guard pages before CPU Arch Protocol installed.
>>>>>     **/
>>>>> @@ -1212,7 +1584,14 @@ HeapGuardCpuArchProtocolNotify (
>>>>>       )
>>>>>     {
>>>>>       ASSERT (gCpu != NULL);
>>>>> -  SetAllGuardPages ();
>>>>> +
>>>>> +  if ((PcdGet8 (PcdHeapGuardPropertyMask) & 0x0F) != 0) {
>>>>> +    SetAllGuardPages ();
>>>>> +  }
>>>>> +
>>>>> +  if (IsUafEnabled ()) {
>>>>> +    GuardAllFreedPages ();
>>>>> +  }
>>>>>     }
>>>>>
>>>>>     /**
>>>>> @@ -1264,6 +1643,14 @@ DumpGuardedMemoryBitmap (
>>>>>       CHAR8     *Ruler1;
>>>>>       CHAR8     *Ruler2;
>>>>>
>>>>> +  if (!IsHeapGuardEnabled () && !IsUafEnabled ()) {
>>>>> +    return;
>>>>> +  }
>>>>> +
>>>>> +  if ((GetDebugPrintErrorLevel () & HEAP_GUARD_DEBUG_LEVEL) == 0) {
>>>>> +    return;
>>>>> +  }
>>>>> +
>>>>>       if (mGuardedMemoryMap == 0 ||
>>>>>           mMapLevel == 0 ||
>>>>>           mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
>>>>> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
>>>>> index 8c34692439..3f2a5283c2 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
>>>>> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
>>>>> @@ -407,6 +407,72 @@ HeapGuardCpuArchProtocolNotify (
>>>>>       VOID
>>>>>       );
>>>>>
>>>>> +/**
>>>>> +  This function checks to see if the given memory map descriptor in a memory map
>>>>> +  can be merged with any guarded free pages.
>>>>> +
>>>>> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
>>>>> +  @param  MaxAddress        Maximum address to stop the merge.
>>>>> +
>>>>> +  @return VOID
>>>>> +
>>>>> +**/
>>>>> +VOID
>>>>> +MergeGuardPages (
>>>>> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
>>>>> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
>>>>> +  );
>>>>> +
>>>>> +/**
>>>>> +  Record freed pages as well as mark them as not-present, if possible.
>>>>> +
>>>>> +  @param[in]  BaseAddress   Base address of just freed pages.
>>>>> +  @param[in]  Pages         Number of freed pages.
>>>>> +
>>>>> +  @return VOID.
>>>>> +**/
>>>>> +VOID
>>>>> +EFIAPI
>>>>> +GuardFreedPages (
>>>>> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
>>>>> +  IN  UINTN                   Pages
>>>>> +  );
>>>>> +
>>>>> +/**
>>>>> +  Check to see if the use-after-free (UAF) feature is enabled or not.
>>>>> +
>>>>> +  @return TRUE/FALSE.
>>>>> +**/
>>>>> +BOOLEAN
>>>>> +IsUafEnabled (
>>>>> +  VOID
>>>>> +  );
>>>>> +
>>>>> +/**
>>>>> +  Put part (at most 64 pages a time) of guarded free pages back to the free page pool.
>>>>> +
>>>>> +  Use-After-Free detection uses a 'use then throw away' approach to detect
>>>>> +  any illegal access to freed memory. The thrown-away memory will be marked as
>>>>> +  not-present so that any access to that memory (after free) will trigger a
>>>>> +  page-fault.
>>>>> +
>>>>> +  The problem is that this will consume lots of memory space. Once no memory
>>>>> +  is left in the pool to allocate, we have to restore part of the freed pages to
>>>>> +  their normal function. Otherwise the whole system will stop functioning.
>>>>> +
>>>>> +  @param  StartAddress    Start address of promoted memory.
>>>>> +  @param  EndAddress      End address of promoted memory.
>>>>> +
>>>>> +  @return TRUE    Succeeded to promote memory.
>>>>> +  @return FALSE   No free memory found.
>>>>> +
>>>>> +**/
>>>>> +BOOLEAN
>>>>> +PromoteGuardedFreePages (
>>>>> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
>>>>> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
>>>>> +  );
>>>>> +
>>>>>     extern BOOLEAN mOnGuarding;
>>>>>
>>>>>     #endif
>>>>> diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
>>>>> index 3b4cc08e7c..9f4419fa5b 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
>>>>> +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
>>>>> @@ -401,9 +401,11 @@ PromoteMemoryResource (
>>>>>       VOID
>>>>>       )
>>>>>     {
>>>>> -  LIST_ENTRY         *Link;
>>>>> -  EFI_GCD_MAP_ENTRY  *Entry;
>>>>> -  BOOLEAN            Promoted;
>>>>> +  LIST_ENTRY            *Link;
>>>>> +  EFI_GCD_MAP_ENTRY     *Entry;
>>>>> +  BOOLEAN               Promoted;
>>>>> +  EFI_PHYSICAL_ADDRESS  StartAddress;
>>>>> +  EFI_PHYSICAL_ADDRESS  EndAddress;
>>>>>
>>>>>       DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
>>>>>
>>>>> @@ -451,6 +453,24 @@ PromoteMemoryResource (
>>>>>
>>>>>       CoreReleaseGcdMemoryLock ();
>>>>>
>>>>> +  if (!Promoted) {
>>>>> +    //
>>>>> +    // If Use-After-Free detection is enabled, we might promote pages from
>>>>> +    // guarded free pages.
>>>>> +    //
>>>>> +    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
>>>>> +    if (Promoted) {
>>>>> +      CoreAcquireGcdMemoryLock ();
>>>>> +      CoreAddRange (
>>>>> +        EfiConventionalMemory,
>>>>> +        StartAddress,
>>>>> +        EndAddress,
>>>>> +        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
>>>>> +        );
>>>>> +      CoreReleaseGcdMemoryLock ();
>>>>> +    }
>>>>> +  }
>>>>> +
>>>>>       return Promoted;
>>>>>     }
>>>>>     /**
>>>>> @@ -896,9 +916,13 @@ CoreConvertPagesEx (
>>>>>         }
>>>>>
>>>>>         //
>>>>> -    // Add our new range in
>>>>> +    // Add our new range in. Don't do this for freed pages if Use-After-Free
>>>>> +    // detection is enabled.
>>>>>         //
>>>>> -    CoreAddRange (MemType, Start, RangeEnd, Attribute);
>>>>> +    if (!IsUafEnabled () || !ChangingType || MemType != EfiConventionalMemory) {
>>>>> +      CoreAddRange (MemType, Start, RangeEnd, Attribute);
>>>>> +    }
>>>>> +
>>>>>         if (ChangingType && (MemType == EfiConventionalMemory)) {
>>>>>           //
>>>>>           // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because
>>>> this
>>>>> @@ -1514,6 +1538,7 @@ CoreFreePages (
>>>>>
>>>>>       Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
>>>>>       if (!EFI_ERROR (Status)) {
>>>>> +    GuardFreedPages (Memory, NumberOfPages);
>>>>>         CoreUpdateProfile (
>>>>>           (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
>>>>>           MemoryProfileActionFreePages,
>>>>> @@ -1908,9 +1933,7 @@ Done:
>>>>>       *MemoryMapSize = BufferSize;
>>>>>
>>>>>       DEBUG_CODE (
>>>>> -    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
>>>>> -      DumpGuardedMemoryBitmap ();
>>>>> -    }
>>>>> +    DumpGuardedMemoryBitmap ();
>>>>>       );
>>>>>
>>>>>       return Status;
>>>>> diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
>>>>> index 1ff2061f7f..0cd74267b3 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
>>>>> +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
>>>>> @@ -26,7 +26,8 @@ typedef struct {
>>>>>     } POOL_FREE;
>>>>>
>>>>>
>>>>> -#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
>>>>> +#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
>>>>> +#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
>>>>>     typedef struct {
>>>>>       UINT32          Signature;
>>>>>       UINT32          Reserved;
>>>>> @@ -367,6 +368,7 @@ CoreAllocatePoolI (
>>>>>       UINTN       NoPages;
>>>>>       UINTN       Granularity;
>>>>>       BOOLEAN     HasPoolTail;
>>>>> +  BOOLEAN     PageAsPool;
>>>>>
>>>>>       ASSERT_LOCKED (&mPoolMemoryLock);
>>>>>
>>>>> @@ -386,6 +388,7 @@ CoreAllocatePoolI (
>>>>>
>>>>>       HasPoolTail  = !(NeedGuard &&
>>>>>                        ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
>>>>> +  PageAsPool = (IsUafEnabled () && !mOnGuarding);
>>>>>
>>>>>       //
>>>>>       // Adjusting the Size to be of proper alignment so that
>>>>> @@ -406,7 +409,7 @@ CoreAllocatePoolI (
>>>>>       // If allocation is over max size, just allocate pages for the request
>>>>>       // (slow)
>>>>>       //
>>>>> -  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
>>>>> +  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
>>>>>         if (!HasPoolTail) {
>>>>>           Size -= sizeof (POOL_TAIL);
>>>>>         }
>>>>> @@ -498,7 +501,7 @@ Done:
>>>>>         //
>>>>>         // If we have a pool buffer, fill in the header & tail info
>>>>>         //
>>>>> -    Head->Signature = POOL_HEAD_SIGNATURE;
>>>>> +    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
>>>>>         Head->Size      = Size;
>>>>>         Head->Type      = (EFI_MEMORY_TYPE) PoolType;
>>>>>         Buffer          = Head->Data;
>>>>> @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
>>>>>       CoreFreePoolPages (Memory, NoPages);
>>>>>       CoreReleaseMemoryLock ();
>>>>>
>>>>> +  GuardFreedPages (Memory, NoPages);
>>>>>       ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
>>>>>         (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
>>>>>     }
>>>>> @@ -685,15 +689,19 @@ CoreFreePoolI (
>>>>>       UINTN       Granularity;
>>>>>       BOOLEAN     IsGuarded;
>>>>>       BOOLEAN     HasPoolTail;
>>>>> +  BOOLEAN     PageAsPool;
>>>>>
>>>>>       ASSERT(Buffer != NULL);
>>>>>       //
>>>>>       // Get the head & tail of the pool entry
>>>>>       //
>>>>> -  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
>>>>> +  Head = BASE_CR (Buffer, POOL_HEAD, Data);
>>>>>       ASSERT(Head != NULL);
>>>>>
>>>>> -  if (Head->Signature != POOL_HEAD_SIGNATURE) {
>>>>> +  if (Head->Signature != POOL_HEAD_SIGNATURE &&
>>>>> +      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
>>>>> +    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
>>>>> +            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
>>>>>         return EFI_INVALID_PARAMETER;
>>>>>       }
>>>>>
>>>>> @@ -701,6 +709,7 @@ CoreFreePoolI (
>>>>>                     IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
>>>>>       HasPoolTail = !(IsGuarded &&
>>>>>                       ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
>>>>> +  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
>>>>>
>>>>>       if (HasPoolTail) {
>>>>>         Tail = HEAD_TO_TAIL (Head);
>>>>> @@ -757,7 +766,7 @@ CoreFreePoolI (
>>>>>       //
>>>>>       // If it's not on the list, it must be pool pages
>>>>>       //
>>>>> -  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
>>>>> +  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
>>>>>
>>>>>         //
>>>>>         // Return the memory pages back to free memory
>>>>> diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
>>>>> index 05eb4f422b..f6595c90ed 100644
>>>>> --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
>>>>> +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
>>>>> @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>>>>>     #include <Guid/PropertiesTable.h>
>>>>>
>>>>>     #include "DxeMain.h"
>>>>> +#include "HeapGuard.h"
>>>>>
>>>>>     #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
>>>>>       ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
>>>>> @@ -205,16 +206,13 @@ MergeMemoryMap (
>>>>>         NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
>>>>>
>>>>>         do {
>>>>> -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
>>>>> +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
>>>>> +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
>>>>>           if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
>>>>> -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
>>>>> -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
>>>>> -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
>>>>> -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
>>>>> -        if (NewMemoryMapEntry != MemoryMapEntry) {
>>>>> -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
>>>>> -        }
>>>>> -
>>>>> +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
>>>>> +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
>>>>> +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
>>>>> +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
>>>>>             NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
>>>>>             continue;
>>>>>           } else {
>>>>>




Thread overview: 16+ messages
2018-10-19  1:50 [PATCH 0/3] Add use-after-free memory detection Jian J Wang
2018-10-19  1:50 ` [PATCH 1/3] MdeModulePkg/MdeModulePkg.dec: add new PCD for UAF detection feature Jian J Wang
2018-10-19 11:27   ` Laszlo Ersek
2018-10-22  2:20   ` Zeng, Star
2018-10-19  1:50 ` [PATCH 2/3] UefiCpuPkg/CpuDxe: fix an infinite loop issue Jian J Wang
2018-10-19 11:45   ` Laszlo Ersek
2018-10-22  7:23     ` Wang, Jian J
2018-10-19  1:50 ` [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection Jian J Wang
2018-10-19 12:04   ` Laszlo Ersek
2018-10-22  7:34     ` Wang, Jian J
2018-10-22  2:53   ` Zeng, Star
2018-10-22  7:12     ` Wang, Jian J
2018-10-22  8:23       ` Zeng, Star
2018-10-23  1:24         ` Wang, Jian J
2018-10-23  3:14           ` Zeng, Star [this message]
2018-10-19  1:56 ` [PATCH 0/3] Add " Wang, Jian J
