* [PATCH 0/4] Support 5-level paging in DXE long mode
@ 2019-07-22  8:15 Ni, Ray
  2019-07-22  8:15 ` [PATCH 1/4] UefiCpuPkg/MpInitLib: Enable 5-level paging for AP when BSP's enabled Ni, Ray
                   ` (5 more replies)
  0 siblings, 6 replies; 23+ messages in thread
From: Ni, Ray @ 2019-07-22  8:15 UTC (permalink / raw)
To: devel

Ray Ni (4):
  UefiCpuPkg/MpInitLib: Enable 5-level paging for AP when BSP's enabled
  UefiCpuPkg/CpuDxe: Support parsing 5-level page table
  MdeModulePkg/DxeIpl: Introduce PCD PcdUse5LevelPageTable
  MdeModulePkg/DxeIpl: Create 5-level page table for long mode

 MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf       |   1 +
 .../Core/DxeIplPeim/X64/VirtualMemory.c       | 227 ++++++++++++------
 MdeModulePkg/MdeModulePkg.dec                 |   7 +
 UefiCpuPkg/CpuDxe/CpuPageTable.c              |  22 +-
 UefiCpuPkg/Library/MpInitLib/MpLib.c          |  11 +
 UefiCpuPkg/Library/MpInitLib/MpLib.h          |   6 +-
 UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc    |   3 +-
 UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm |  14 +-
 8 files changed, 209 insertions(+), 82 deletions(-)

-- 
2.21.0.windows.1

^ permalink raw reply	[flat|nested] 23+ messages in thread
* [PATCH 1/4] UefiCpuPkg/MpInitLib: Enable 5-level paging for AP when BSP's enabled 2019-07-22 8:15 [PATCH 0/4] Support 5-level paging in DXE long mode Ni, Ray @ 2019-07-22 8:15 ` Ni, Ray 2019-07-23 9:15 ` [edk2-devel] " Laszlo Ersek 2019-07-22 8:15 ` [PATCH 2/4] UefiCpuPkg/CpuDxe: Support parsing 5-level page table Ni, Ray ` (4 subsequent siblings) 5 siblings, 1 reply; 23+ messages in thread From: Ni, Ray @ 2019-07-22 8:15 UTC (permalink / raw) To: devel; +Cc: Eric Dong, Laszlo Ersek REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 MpInitLib is the library that's responsible to wake up APs to provide MP PPI and Protocol services. The patch synchronizes BSP's CR4.LA57 to each AP's CR4.LA57. Without this change, AP may enter to GP fault when BSP's 5-level page table is set to AP during AP wakes up. Signed-off-by: Ray Ni <ray.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> --- UefiCpuPkg/Library/MpInitLib/MpLib.c | 11 +++++++++++ UefiCpuPkg/Library/MpInitLib/MpLib.h | 6 +++++- UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc | 3 ++- UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm | 14 +++++++++++++- 4 files changed, 31 insertions(+), 3 deletions(-) diff --git a/UefiCpuPkg/Library/MpInitLib/MpLib.c b/UefiCpuPkg/Library/MpInitLib/MpLib.c index 6f51bc4ebf..e4691315e9 100644 --- a/UefiCpuPkg/Library/MpInitLib/MpLib.c +++ b/UefiCpuPkg/Library/MpInitLib/MpLib.c @@ -790,6 +790,7 @@ FillExchangeInfoData ( volatile MP_CPU_EXCHANGE_INFO *ExchangeInfo; UINTN Size; IA32_SEGMENT_DESCRIPTOR *Selector; + IA32_CR4 Cr4; ExchangeInfo = CpuMpData->MpCpuExchangeInfo; ExchangeInfo->Lock = 0; @@ -814,6 +815,16 @@ FillExchangeInfoData ( ExchangeInfo->InitializeFloatingPointUnitsAddress = (UINTN)InitializeFloatingPointUnits; + // + // We can check either CPUID(7).ECX[bit16] or check CR4.LA57[bit12] + // to determin whether 5-Level Paging is enabled. + // Using latter way is simpler because it also eliminates the needs to + // check whether platform wants to enable it. + // + Cr4.UintN = AsmReadCr4 (); + ExchangeInfo->Enable5LevelPaging = (BOOLEAN) (Cr4.Bits.LA57 == 1); + DEBUG ((DEBUG_INFO, "CpuMp: 5-Level Paging = %d\n", ExchangeInfo->Enable5LevelPaging)); + // // Get the BSP's data of GDT and IDT // diff --git a/UefiCpuPkg/Library/MpInitLib/MpLib.h b/UefiCpuPkg/Library/MpInitLib/MpLib.h index f89037c59e..fa7d6b32e9 100644 --- a/UefiCpuPkg/Library/MpInitLib/MpLib.h +++ b/UefiCpuPkg/Library/MpInitLib/MpLib.h @@ -1,7 +1,7 @@ /** @file Common header file for MP Initialize Library. - Copyright (c) 2016 - 2018, Intel Corporation. All rights reserved.<BR> + Copyright (c) 2016 - 2019, Intel Corporation. All rights reserved.<BR> SPDX-License-Identifier: BSD-2-Clause-Patent **/ @@ -185,6 +185,10 @@ typedef struct { UINT16 ModeTransitionSegment; UINT32 ModeHighMemory; UINT16 ModeHighSegment; + // + // Enable5LevelPaging indicates whether 5-level paging is enabled in long mode. + // + UINTN Enable5LevelPaging; } MP_CPU_EXCHANGE_INFO; #pragma pack() diff --git a/UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc b/UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc index 467f54a860..58ef369342 100644 --- a/UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc +++ b/UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc @@ -1,5 +1,5 @@ ;------------------------------------------------------------------------------ ; -; Copyright (c) 2015 - 2017, Intel Corporation. All rights reserved.<BR> +; Copyright (c) 2015 - 2019, Intel Corporation. 
All rights reserved.<BR> ; SPDX-License-Identifier: BSD-2-Clause-Patent ; ; Module Name: @@ -40,3 +40,4 @@ ModeTransitionMemoryLocation equ LockLocation + 94h ModeTransitionSegmentLocation equ LockLocation + 98h ModeHighMemoryLocation equ LockLocation + 9Ah ModeHighSegmentLocation equ LockLocation + 9Eh +Enable5LevelPagingLocation equ LockLocation + 0A0h diff --git a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm index cea90f3d4d..b563c2ed3e 100644 --- a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm +++ b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm @@ -1,5 +1,5 @@ ;------------------------------------------------------------------------------ ; -; Copyright (c) 2015 - 2018, Intel Corporation. All rights reserved.<BR> +; Copyright (c) 2015 - 2019, Intel Corporation. All rights reserved.<BR> ; SPDX-License-Identifier: BSD-2-Clause-Patent ; ; Module Name: @@ -100,6 +100,18 @@ SkipEnableExecuteDisableBit: ; mov eax, cr4 bts eax, 5 + + mov esi, Enable5LevelPagingLocation + cmp byte [ebx + esi], 0 + jz SkipEnable5Paging + + ; + ; Enable 5 Level Paging + ; + bts eax, 12 ; Set LA57=1. + +SkipEnable5Paging: + mov cr4, eax ; -- 2.21.0.windows.1 ^ permalink raw reply related [flat|nested] 23+ messages in thread
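The comment added in FillExchangeInfoData() above mentions two equivalent detection methods. A minimal sketch of both checks, using edk2 BaseLib calls, for illustration only (not part of the patch):

    #include <Base.h>
    #include <Library/BaseLib.h>
    #include <Library/DebugLib.h>

    BOOLEAN
    Is5LevelPagingActive (
      VOID
      )
    {
      UINT32    RegEcx;
      IA32_CR4  Cr4;

      //
      // CPUID.(EAX=07H, ECX=0):ECX[bit 16] reports whether the CPU *supports*
      // LA57; CR4.LA57 reports whether 5-level paging is actually *enabled*.
      // The patch reads CR4 so the platform's enable/disable decision is
      // picked up automatically.
      //
      AsmCpuidEx (0x07, 0, NULL, NULL, &RegEcx, NULL);
      DEBUG ((DEBUG_VERBOSE, "LA57 supported by CPU: %d\n", (INT32) ((RegEcx & BIT16) != 0)));

      Cr4.UintN = AsmReadCr4 ();
      return (BOOLEAN) (Cr4.Bits.LA57 == 1);
    }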
* Re: [edk2-devel] [PATCH 1/4] UefiCpuPkg/MpInitLib: Enable 5-level paging for AP when BSP's enabled 2019-07-22 8:15 ` [PATCH 1/4] UefiCpuPkg/MpInitLib: Enable 5-level paging for AP when BSP's enabled Ni, Ray @ 2019-07-23 9:15 ` Laszlo Ersek 2019-07-24 7:02 ` Ni, Ray 0 siblings, 1 reply; 23+ messages in thread From: Laszlo Ersek @ 2019-07-23 9:15 UTC (permalink / raw) To: devel, ray.ni; +Cc: Eric Dong Later (after more feedback has been collected), I would like to regression-test this series; for now, just some superficial comments: On 07/22/19 10:15, Ni, Ray wrote: > REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 > > MpInitLib is the library that's responsible to wake up APs to provide > MP PPI and Protocol services. > > The patch synchronizes BSP's CR4.LA57 to each AP's CR4.LA57. > Without this change, AP may enter to GP fault when BSP's 5-level page > table is set to AP during AP wakes up. > > Signed-off-by: Ray Ni <ray.ni@intel.com> > Cc: Eric Dong <eric.dong@intel.com> > Cc: Laszlo Ersek <lersek@redhat.com> > --- > UefiCpuPkg/Library/MpInitLib/MpLib.c | 11 +++++++++++ > UefiCpuPkg/Library/MpInitLib/MpLib.h | 6 +++++- > UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc | 3 ++- > UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm | 14 +++++++++++++- > 4 files changed, 31 insertions(+), 3 deletions(-) > > diff --git a/UefiCpuPkg/Library/MpInitLib/MpLib.c b/UefiCpuPkg/Library/MpInitLib/MpLib.c > index 6f51bc4ebf..e4691315e9 100644 > --- a/UefiCpuPkg/Library/MpInitLib/MpLib.c > +++ b/UefiCpuPkg/Library/MpInitLib/MpLib.c > @@ -790,6 +790,7 @@ FillExchangeInfoData ( > volatile MP_CPU_EXCHANGE_INFO *ExchangeInfo; > UINTN Size; > IA32_SEGMENT_DESCRIPTOR *Selector; > + IA32_CR4 Cr4; > > ExchangeInfo = CpuMpData->MpCpuExchangeInfo; > ExchangeInfo->Lock = 0; > @@ -814,6 +815,16 @@ FillExchangeInfoData ( > > ExchangeInfo->InitializeFloatingPointUnitsAddress = (UINTN)InitializeFloatingPointUnits; > > + // > + // We can check either CPUID(7).ECX[bit16] or check CR4.LA57[bit12] > + // to determin whether 5-Level Paging is enabled. > + // Using latter way is simpler because it also eliminates the needs to > + // check whether platform wants to enable it. > + // > + Cr4.UintN = AsmReadCr4 (); (1) Are the above checks (CPUID and CR4) interchangeable on AMD processors too? > + ExchangeInfo->Enable5LevelPaging = (BOOLEAN) (Cr4.Bits.LA57 == 1); > + DEBUG ((DEBUG_INFO, "CpuMp: 5-Level Paging = %d\n", ExchangeInfo->Enable5LevelPaging)); > + > // > // Get the BSP's data of GDT and IDT > // (2) Quite unimportant comment, but I might as well make it: - In library code, it's best to refer to actual module names with gEfiCallerBaseName. "CpuMp" isn't ideal in this log message. - "ExchangeInfo->Enable5LevelPaging" is a UINTN; we shouldn't log it with %d. The portable logging for UINTN is to cast it to UINT64, and print it with %Lu. In this particular case, we know it's either 0 or 1, so we can print it with %d too, but then we should cast it to INT32. > diff --git a/UefiCpuPkg/Library/MpInitLib/MpLib.h b/UefiCpuPkg/Library/MpInitLib/MpLib.h > index f89037c59e..fa7d6b32e9 100644 > --- a/UefiCpuPkg/Library/MpInitLib/MpLib.h > +++ b/UefiCpuPkg/Library/MpInitLib/MpLib.h > @@ -1,7 +1,7 @@ > /** @file > Common header file for MP Initialize Library. > > - Copyright (c) 2016 - 2018, Intel Corporation. All rights reserved.<BR> > + Copyright (c) 2016 - 2019, Intel Corporation. 
All rights reserved.<BR> > SPDX-License-Identifier: BSD-2-Clause-Patent > > **/ > @@ -185,6 +185,10 @@ typedef struct { > UINT16 ModeTransitionSegment; > UINT32 ModeHighMemory; > UINT16 ModeHighSegment; > + // > + // Enable5LevelPaging indicates whether 5-level paging is enabled in long mode. > + // > + UINTN Enable5LevelPaging; > } MP_CPU_EXCHANGE_INFO; > > #pragma pack() > diff --git a/UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc b/UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc > index 467f54a860..58ef369342 100644 > --- a/UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc > +++ b/UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc > @@ -1,5 +1,5 @@ > ;------------------------------------------------------------------------------ ; > -; Copyright (c) 2015 - 2017, Intel Corporation. All rights reserved.<BR> > +; Copyright (c) 2015 - 2019, Intel Corporation. All rights reserved.<BR> > ; SPDX-License-Identifier: BSD-2-Clause-Patent > ; > ; Module Name: > @@ -40,3 +40,4 @@ ModeTransitionMemoryLocation equ LockLocation + 94h > ModeTransitionSegmentLocation equ LockLocation + 98h > ModeHighMemoryLocation equ LockLocation + 9Ah > ModeHighSegmentLocation equ LockLocation + 9Eh > +Enable5LevelPagingLocation equ LockLocation + 0A0h (3) Any particular reason for "0A0h" rather than just "A0h"? > diff --git a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm > index cea90f3d4d..b563c2ed3e 100644 > --- a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm > +++ b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm > @@ -1,5 +1,5 @@ > ;------------------------------------------------------------------------------ ; > -; Copyright (c) 2015 - 2018, Intel Corporation. All rights reserved.<BR> > +; Copyright (c) 2015 - 2019, Intel Corporation. All rights reserved.<BR> > ; SPDX-License-Identifier: BSD-2-Clause-Patent > ; > ; Module Name: > @@ -100,6 +100,18 @@ SkipEnableExecuteDisableBit: > ; > mov eax, cr4 > bts eax, 5 > + > + mov esi, Enable5LevelPagingLocation > + cmp byte [ebx + esi], 0 (4) If we use a byte comparison here, why don't we make the field itself a UINT8 (or even BOOLEAN)? The MP_CPU_EXCHANGE_INFO structure is packed. (If we still want to use a whole UINTN for this purpose, then I think the zero comparison should cover the whole field.) > + jz SkipEnable5Paging > + > + ; > + ; Enable 5 Level Paging > + ; > + bts eax, 12 ; Set LA57=1. > + > +SkipEnable5Paging: (5) Not too important, but we might as well be consistent with the naming elsewhere, and call this "SkipEnable5LevelPaging". Up to you. Thanks Laszlo > + > mov cr4, eax > > ; > ^ permalink raw reply [flat|nested] 23+ messages in thread
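For reference, point (2) above could be applied along these lines (illustration only; the exact form is up to the V2 patch):

    //
    // Portable way to log a UINTN: cast to UINT64 and print with %Lu.
    //
    DEBUG ((DEBUG_INFO, "%a: 5-Level Paging = %Lu\n",
      gEfiCallerBaseName, (UINT64) ExchangeInfo->Enable5LevelPaging));

    //
    // Acceptable alternative when the value is known to be 0 or 1:
    // cast to INT32 and print with %d.
    //
    DEBUG ((DEBUG_INFO, "%a: 5-Level Paging = %d\n",
      gEfiCallerBaseName, (INT32) ExchangeInfo->Enable5LevelPaging));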
* Re: [edk2-devel] [PATCH 1/4] UefiCpuPkg/MpInitLib: Enable 5-level paging for AP when BSP's enabled 2019-07-23 9:15 ` [edk2-devel] " Laszlo Ersek @ 2019-07-24 7:02 ` Ni, Ray 0 siblings, 0 replies; 23+ messages in thread From: Ni, Ray @ 2019-07-24 7:02 UTC (permalink / raw) To: Laszlo Ersek, devel@edk2.groups.io; +Cc: Dong, Eric > -----Original Message----- > From: Laszlo Ersek <lersek@redhat.com> > Sent: Tuesday, July 23, 2019 5:15 PM > To: devel@edk2.groups.io; Ni, Ray <ray.ni@intel.com> > Cc: Dong, Eric <eric.dong@intel.com> > Subject: Re: [edk2-devel] [PATCH 1/4] UefiCpuPkg/MpInitLib: Enable 5-level > paging for AP when BSP's enabled > > Later (after more feedback has been collected), I would like to regression- > test this series; for now, just some superficial comments: > > On 07/22/19 10:15, Ni, Ray wrote: > > REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 > > > > MpInitLib is the library that's responsible to wake up APs to provide > > MP PPI and Protocol services. > > > > The patch synchronizes BSP's CR4.LA57 to each AP's CR4.LA57. > > Without this change, AP may enter to GP fault when BSP's 5-level page > > table is set to AP during AP wakes up. > > > > Signed-off-by: Ray Ni <ray.ni@intel.com> > > Cc: Eric Dong <eric.dong@intel.com> > > Cc: Laszlo Ersek <lersek@redhat.com> > > --- > > UefiCpuPkg/Library/MpInitLib/MpLib.c | 11 +++++++++++ > > UefiCpuPkg/Library/MpInitLib/MpLib.h | 6 +++++- > > UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc | 3 ++- > > UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm | 14 +++++++++++++- > > 4 files changed, 31 insertions(+), 3 deletions(-) > > > > diff --git a/UefiCpuPkg/Library/MpInitLib/MpLib.c > > b/UefiCpuPkg/Library/MpInitLib/MpLib.c > > index 6f51bc4ebf..e4691315e9 100644 > > --- a/UefiCpuPkg/Library/MpInitLib/MpLib.c > > +++ b/UefiCpuPkg/Library/MpInitLib/MpLib.c > > @@ -790,6 +790,7 @@ FillExchangeInfoData ( > > volatile MP_CPU_EXCHANGE_INFO *ExchangeInfo; > > UINTN Size; > > IA32_SEGMENT_DESCRIPTOR *Selector; > > + IA32_CR4 Cr4; > > > > ExchangeInfo = CpuMpData->MpCpuExchangeInfo; > > ExchangeInfo->Lock = 0; > > @@ -814,6 +815,16 @@ FillExchangeInfoData ( > > > > ExchangeInfo->InitializeFloatingPointUnitsAddress = > > (UINTN)InitializeFloatingPointUnits; > > > > + // > > + // We can check either CPUID(7).ECX[bit16] or check CR4.LA57[bit12] > > + // to determin whether 5-Level Paging is enabled. > > + // Using latter way is simpler because it also eliminates the needs > > + to // check whether platform wants to enable it. > > + // > > + Cr4.UintN = AsmReadCr4 (); > > (1) Are the above checks (CPUID and CR4) interchangeable on AMD > processors too? I am not sure about that really. Since Intel and AMD has magically aligned each bit in control registers for quite a long period, I don't see big risk here for 5-level paging support on future AMD processors😊 > > > + ExchangeInfo->Enable5LevelPaging = (BOOLEAN) (Cr4.Bits.LA57 == 1); > > + DEBUG ((DEBUG_INFO, "CpuMp: 5-Level Paging = %d\n", > > + ExchangeInfo->Enable5LevelPaging)); > > + > > // > > // Get the BSP's data of GDT and IDT > > // > > (2) Quite unimportant comment, but I might as well make it: > > - In library code, it's best to refer to actual module names with > gEfiCallerBaseName. "CpuMp" isn't ideal in this log message. > > - "ExchangeInfo->Enable5LevelPaging" is a UINTN; we shouldn't log it > with %d. The portable logging for UINTN is to cast it to UINT64, and print it > with %Lu. 
In this particular case, we know it's either 0 or 1, so we can print it > with %d too, but then we should cast it to INT32. Sure. Will address them in V2. > > > diff --git a/UefiCpuPkg/Library/MpInitLib/MpLib.h > > b/UefiCpuPkg/Library/MpInitLib/MpLib.h > > index f89037c59e..fa7d6b32e9 100644 > > --- a/UefiCpuPkg/Library/MpInitLib/MpLib.h > > +++ b/UefiCpuPkg/Library/MpInitLib/MpLib.h > > @@ -1,7 +1,7 @@ > > /** @file > > Common header file for MP Initialize Library. > > > > - Copyright (c) 2016 - 2018, Intel Corporation. All rights > > reserved.<BR> > > + Copyright (c) 2016 - 2019, Intel Corporation. All rights > > + reserved.<BR> > > SPDX-License-Identifier: BSD-2-Clause-Patent > > > > **/ > > @@ -185,6 +185,10 @@ typedef struct { > > UINT16 ModeTransitionSegment; > > UINT32 ModeHighMemory; > > UINT16 ModeHighSegment; > > + // > > + // Enable5LevelPaging indicates whether 5-level paging is enabled in long > mode. > > + // > > + UINTN Enable5LevelPaging; > > } MP_CPU_EXCHANGE_INFO; > > > > #pragma pack() > > diff --git a/UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc > > b/UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc > > index 467f54a860..58ef369342 100644 > > --- a/UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc > > +++ b/UefiCpuPkg/Library/MpInitLib/X64/MpEqu.inc > > @@ -1,5 +1,5 @@ > > > > ;--------------------------------------------------------------------- > > --------- ; -; Copyright (c) 2015 - 2017, Intel Corporation. All > > rights reserved.<BR> > > +; Copyright (c) 2015 - 2019, Intel Corporation. All rights > > +reserved.<BR> > > ; SPDX-License-Identifier: BSD-2-Clause-Patent ; ; Module Name: > > @@ -40,3 +40,4 @@ ModeTransitionMemoryLocation equ > LockLocation + 94h > > ModeTransitionSegmentLocation equ LockLocation + 98h > > ModeHighMemoryLocation equ LockLocation + 9Ah > > ModeHighSegmentLocation equ LockLocation + 9Eh > > +Enable5LevelPagingLocation equ LockLocation + 0A0h > > (3) Any particular reason for "0A0h" rather than just "A0h"? Originally I had some idea to make sure "0-9" in the beginning. Now you ask, as long as build passes I don't see reason to have 0 as prefix. > > > diff --git a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm > > b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm > > index cea90f3d4d..b563c2ed3e 100644 > > --- a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm > > +++ b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm > > @@ -1,5 +1,5 @@ > > > > ;--------------------------------------------------------------------- > > --------- ; -; Copyright (c) 2015 - 2018, Intel Corporation. All > > rights reserved.<BR> > > +; Copyright (c) 2015 - 2019, Intel Corporation. All rights > > +reserved.<BR> > > ; SPDX-License-Identifier: BSD-2-Clause-Patent ; ; Module Name: > > @@ -100,6 +100,18 @@ SkipEnableExecuteDisableBit: > > ; > > mov eax, cr4 > > bts eax, 5 > > + > > + mov esi, Enable5LevelPagingLocation > > + cmp byte [ebx + esi], 0 > > (4) If we use a byte comparison here, why don't we make the field itself a > UINT8 (or even BOOLEAN)? The MP_CPU_EXCHANGE_INFO structure is > packed. Sure. Will address them in V2. > > (If we still want to use a whole UINTN for this purpose, then I think the zero > comparison should cover the whole field.) > > > + jz SkipEnable5Paging > > + > > + ; > > + ; Enable 5 Level Paging > > + ; > > + bts eax, 12 ; Set LA57=1. > > + > > +SkipEnable5Paging: > > (5) Not too important, but we might as well be consistent with the naming > elsewhere, and call this "SkipEnable5LevelPaging". Up to you. > Sure. Will address them in V2. 
> Thanks
> Laszlo
>
> > +
> >    mov cr4, eax
> >
> >    ;
> >

^ permalink raw reply	[flat|nested] 23+ messages in thread
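Point (4) above, which Ray agrees to address in V2, amounts to shrinking the exchange-info field so it matches the byte compare in MpFuncs.nasm. Roughly (sketch only, not the actual V2 patch):

    #pragma pack(1)
    typedef struct {
      // ... preceding fields unchanged (Lock, ..., ModeHighMemory, ModeHighSegment) ...
      //
      // BOOLEAN (one byte) so that "cmp byte [ebx + esi], 0" in MpFuncs.nasm
      // covers the whole field; the offsets in MpEqu.inc must stay in sync.
      //
      BOOLEAN             Enable5LevelPaging;
    } MP_CPU_EXCHANGE_INFO;
    #pragma pack()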
* [PATCH 2/4] UefiCpuPkg/CpuDxe: Support parsing 5-level page table 2019-07-22 8:15 [PATCH 0/4] Support 5-level paging in DXE long mode Ni, Ray 2019-07-22 8:15 ` [PATCH 1/4] UefiCpuPkg/MpInitLib: Enable 5-level paging for AP when BSP's enabled Ni, Ray @ 2019-07-22 8:15 ` Ni, Ray 2019-07-23 9:22 ` [edk2-devel] " Laszlo Ersek 2019-07-22 8:15 ` [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD PcdUse5LevelPageTable Ni, Ray ` (3 subsequent siblings) 5 siblings, 1 reply; 23+ messages in thread From: Ni, Ray @ 2019-07-22 8:15 UTC (permalink / raw) To: devel; +Cc: Eric Dong, Laszlo Ersek REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 Signed-off-by: Ray Ni <ray.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> --- UefiCpuPkg/CpuDxe/CpuPageTable.c | 22 ++++++++++++++++++++-- 1 file changed, 20 insertions(+), 2 deletions(-) diff --git a/UefiCpuPkg/CpuDxe/CpuPageTable.c b/UefiCpuPkg/CpuDxe/CpuPageTable.c index c369b44f12..8e959eb2b7 100644 --- a/UefiCpuPkg/CpuDxe/CpuPageTable.c +++ b/UefiCpuPkg/CpuDxe/CpuPageTable.c @@ -1,7 +1,7 @@ /** @file Page table management support. - Copyright (c) 2017, Intel Corporation. All rights reserved.<BR> + Copyright (c) 2017 - 2019, Intel Corporation. All rights reserved.<BR> Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR> SPDX-License-Identifier: BSD-2-Clause-Patent @@ -276,25 +276,43 @@ GetPageTableEntry ( UINTN Index2; UINTN Index3; UINTN Index4; + UINTN Index5; UINT64 *L1PageTable; UINT64 *L2PageTable; UINT64 *L3PageTable; UINT64 *L4PageTable; + UINT64 *L5PageTable; UINT64 AddressEncMask; + IA32_CR4 Cr4; + BOOLEAN Enable5LevelPaging; ASSERT (PagingContext != NULL); + Index5 = ((UINTN)RShiftU64 (Address, 48)) & PAGING_PAE_INDEX_MASK; Index4 = ((UINTN)RShiftU64 (Address, 39)) & PAGING_PAE_INDEX_MASK; Index3 = ((UINTN)Address >> 30) & PAGING_PAE_INDEX_MASK; Index2 = ((UINTN)Address >> 21) & PAGING_PAE_INDEX_MASK; Index1 = ((UINTN)Address >> 12) & PAGING_PAE_INDEX_MASK; + Cr4.UintN = AsmReadCr4 (); + Enable5LevelPaging = (BOOLEAN) (Cr4.Bits.LA57 == 1); + // Make sure AddressEncMask is contained to smallest supported address field. // AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64; if (PagingContext->MachineType == IMAGE_FILE_MACHINE_X64) { - L4PageTable = (UINT64 *)(UINTN)PagingContext->ContextData.X64.PageTableBase; + if (Enable5LevelPaging) { + L5PageTable = (UINT64 *)(UINTN)PagingContext->ContextData.X64.PageTableBase; + if (L5PageTable[Index5] == 0) { + *PageAttribute = PageNone; + return NULL; + } + + L4PageTable = (UINT64 *)(UINTN)(L5PageTable[Index5] & ~AddressEncMask & PAGING_4K_ADDRESS_MASK_64); + } else { + L4PageTable = (UINT64 *)(UINTN)PagingContext->ContextData.X64.PageTableBase; + } if (L4PageTable[Index4] == 0) { *PageAttribute = PageNone; return NULL; -- 2.21.0.windows.1 ^ permalink raw reply related [flat|nested] 23+ messages in thread
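The index calculation added above follows the standard x86 paging layout. A self-contained sketch of the same decomposition (illustrative helper, not in the patch):

    #include <Base.h>
    #include <Library/BaseLib.h>

    #define PTE_INDEX_MASK  0x1FF   // 9 bits: 512 entries per paging level

    //
    // Split a canonical linear address into the five table indices used by
    // GetPageTableEntry() above (4 KiB page granularity).  Without LA57 the
    // linear address is only 48 bits wide, so the PML5 index is always 0,
    // which is why computing Index5 is harmless in 4-level mode as well.
    //
    VOID
    DecomposeLinearAddress (
      IN  UINT64  Address,
      OUT UINTN   *Index5,
      OUT UINTN   *Index4,
      OUT UINTN   *Index3,
      OUT UINTN   *Index2,
      OUT UINTN   *Index1
      )
    {
      *Index5 = (UINTN) RShiftU64 (Address, 48) & PTE_INDEX_MASK;  // PML5E
      *Index4 = (UINTN) RShiftU64 (Address, 39) & PTE_INDEX_MASK;  // PML4E
      *Index3 = (UINTN) RShiftU64 (Address, 30) & PTE_INDEX_MASK;  // PDPTE
      *Index2 = (UINTN) RShiftU64 (Address, 21) & PTE_INDEX_MASK;  // PDE
      *Index1 = (UINTN) RShiftU64 (Address, 12) & PTE_INDEX_MASK;  // PTE
    }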
* Re: [edk2-devel] [PATCH 2/4] UefiCpuPkg/CpuDxe: Support parsing 5-level page table 2019-07-22 8:15 ` [PATCH 2/4] UefiCpuPkg/CpuDxe: Support parsing 5-level page table Ni, Ray @ 2019-07-23 9:22 ` Laszlo Ersek 2019-07-24 7:03 ` Ni, Ray 0 siblings, 1 reply; 23+ messages in thread From: Laszlo Ersek @ 2019-07-23 9:22 UTC (permalink / raw) To: devel, ray.ni; +Cc: Eric Dong On 07/22/19 10:15, Ni, Ray wrote: > REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 > > Signed-off-by: Ray Ni <ray.ni@intel.com> > Cc: Eric Dong <eric.dong@intel.com> > Cc: Laszlo Ersek <lersek@redhat.com> > --- > UefiCpuPkg/CpuDxe/CpuPageTable.c | 22 ++++++++++++++++++++-- > 1 file changed, 20 insertions(+), 2 deletions(-) > > diff --git a/UefiCpuPkg/CpuDxe/CpuPageTable.c b/UefiCpuPkg/CpuDxe/CpuPageTable.c > index c369b44f12..8e959eb2b7 100644 > --- a/UefiCpuPkg/CpuDxe/CpuPageTable.c > +++ b/UefiCpuPkg/CpuDxe/CpuPageTable.c > @@ -1,7 +1,7 @@ > /** @file > Page table management support. > > - Copyright (c) 2017, Intel Corporation. All rights reserved.<BR> > + Copyright (c) 2017 - 2019, Intel Corporation. All rights reserved.<BR> > Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR> > > SPDX-License-Identifier: BSD-2-Clause-Patent > @@ -276,25 +276,43 @@ GetPageTableEntry ( > UINTN Index2; > UINTN Index3; > UINTN Index4; > + UINTN Index5; > UINT64 *L1PageTable; > UINT64 *L2PageTable; > UINT64 *L3PageTable; > UINT64 *L4PageTable; > + UINT64 *L5PageTable; > UINT64 AddressEncMask; > + IA32_CR4 Cr4; > + BOOLEAN Enable5LevelPaging; > > ASSERT (PagingContext != NULL); > > + Index5 = ((UINTN)RShiftU64 (Address, 48)) & PAGING_PAE_INDEX_MASK; > Index4 = ((UINTN)RShiftU64 (Address, 39)) & PAGING_PAE_INDEX_MASK; > Index3 = ((UINTN)Address >> 30) & PAGING_PAE_INDEX_MASK; > Index2 = ((UINTN)Address >> 21) & PAGING_PAE_INDEX_MASK; > Index1 = ((UINTN)Address >> 12) & PAGING_PAE_INDEX_MASK; > > + Cr4.UintN = AsmReadCr4 (); > + Enable5LevelPaging = (BOOLEAN) (Cr4.Bits.LA57 == 1); > + > // Make sure AddressEncMask is contained to smallest supported address field. > // > AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64; > > if (PagingContext->MachineType == IMAGE_FILE_MACHINE_X64) { > - L4PageTable = (UINT64 *)(UINTN)PagingContext->ContextData.X64.PageTableBase; > + if (Enable5LevelPaging) { > + L5PageTable = (UINT64 *)(UINTN)PagingContext->ContextData.X64.PageTableBase; > + if (L5PageTable[Index5] == 0) { > + *PageAttribute = PageNone; > + return NULL; > + } > + > + L4PageTable = (UINT64 *)(UINTN)(L5PageTable[Index5] & ~AddressEncMask & PAGING_4K_ADDRESS_MASK_64); > + } else { > + L4PageTable = (UINT64 *)(UINTN)PagingContext->ContextData.X64.PageTableBase; > + } > if (L4PageTable[Index4] == 0) { > *PageAttribute = PageNone; > return NULL; > The patch seems to be a no-op if Enable5LevelPaging is FALSE, so that's good. Questions: (1) Same question as under patch #1: is the CR4 check reliable on AMD processors too? (2) Should we read CR4 (or call CPUID) every time this function is invoked? Can we perform the check at CpuDxe startup, and cache the result? (3) Should we consider the PCD (from patch #3) before accessing the hardware (CR4 / CPUID alike)? Thanks Laszlo ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [edk2-devel] [PATCH 2/4] UefiCpuPkg/CpuDxe: Support parsing 5-level page table 2019-07-23 9:22 ` [edk2-devel] " Laszlo Ersek @ 2019-07-24 7:03 ` Ni, Ray 2019-07-25 17:19 ` Laszlo Ersek 0 siblings, 1 reply; 23+ messages in thread From: Ni, Ray @ 2019-07-24 7:03 UTC (permalink / raw) To: Laszlo Ersek, devel@edk2.groups.io; +Cc: Dong, Eric > -----Original Message----- > From: Laszlo Ersek <lersek@redhat.com> > Sent: Tuesday, July 23, 2019 5:23 PM > To: devel@edk2.groups.io; Ni, Ray <ray.ni@intel.com> > Cc: Dong, Eric <eric.dong@intel.com> > Subject: Re: [edk2-devel] [PATCH 2/4] UefiCpuPkg/CpuDxe: Support parsing > 5-level page table > > On 07/22/19 10:15, Ni, Ray wrote: > > REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 > > > > Signed-off-by: Ray Ni <ray.ni@intel.com> > > Cc: Eric Dong <eric.dong@intel.com> > > Cc: Laszlo Ersek <lersek@redhat.com> > > --- > > UefiCpuPkg/CpuDxe/CpuPageTable.c | 22 ++++++++++++++++++++-- > > 1 file changed, 20 insertions(+), 2 deletions(-) > > > > diff --git a/UefiCpuPkg/CpuDxe/CpuPageTable.c > > b/UefiCpuPkg/CpuDxe/CpuPageTable.c > > index c369b44f12..8e959eb2b7 100644 > > --- a/UefiCpuPkg/CpuDxe/CpuPageTable.c > > +++ b/UefiCpuPkg/CpuDxe/CpuPageTable.c > > @@ -1,7 +1,7 @@ > > /** @file > > Page table management support. > > > > - Copyright (c) 2017, Intel Corporation. All rights reserved.<BR> > > + Copyright (c) 2017 - 2019, Intel Corporation. All rights > > + reserved.<BR> > > Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR> > > > > SPDX-License-Identifier: BSD-2-Clause-Patent @@ -276,25 +276,43 @@ > > GetPageTableEntry ( > > UINTN Index2; > > UINTN Index3; > > UINTN Index4; > > + UINTN Index5; > > UINT64 *L1PageTable; > > UINT64 *L2PageTable; > > UINT64 *L3PageTable; > > UINT64 *L4PageTable; > > + UINT64 *L5PageTable; > > UINT64 AddressEncMask; > > + IA32_CR4 Cr4; > > + BOOLEAN Enable5LevelPaging; > > > > ASSERT (PagingContext != NULL); > > > > + Index5 = ((UINTN)RShiftU64 (Address, 48)) & > PAGING_PAE_INDEX_MASK; > > Index4 = ((UINTN)RShiftU64 (Address, 39)) & > PAGING_PAE_INDEX_MASK; > > Index3 = ((UINTN)Address >> 30) & PAGING_PAE_INDEX_MASK; > > Index2 = ((UINTN)Address >> 21) & PAGING_PAE_INDEX_MASK; > > Index1 = ((UINTN)Address >> 12) & PAGING_PAE_INDEX_MASK; > > > > + Cr4.UintN = AsmReadCr4 (); > > + Enable5LevelPaging = (BOOLEAN) (Cr4.Bits.LA57 == 1); > > + > > // Make sure AddressEncMask is contained to smallest supported address > field. > > // > > AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) > & > > PAGING_1G_ADDRESS_MASK_64; > > > > if (PagingContext->MachineType == IMAGE_FILE_MACHINE_X64) { > > - L4PageTable = (UINT64 *)(UINTN)PagingContext- > >ContextData.X64.PageTableBase; > > + if (Enable5LevelPaging) { > > + L5PageTable = (UINT64 *)(UINTN)PagingContext- > >ContextData.X64.PageTableBase; > > + if (L5PageTable[Index5] == 0) { > > + *PageAttribute = PageNone; > > + return NULL; > > + } > > + > > + L4PageTable = (UINT64 *)(UINTN)(L5PageTable[Index5] & > ~AddressEncMask & PAGING_4K_ADDRESS_MASK_64); > > + } else { > > + L4PageTable = (UINT64 *)(UINTN)PagingContext- > >ContextData.X64.PageTableBase; > > + } > > if (L4PageTable[Index4] == 0) { > > *PageAttribute = PageNone; > > return NULL; > > > > The patch seems to be a no-op if Enable5LevelPaging is FALSE, so that's good. > > Questions: > > (1) Same question as under patch #1: is the CR4 check reliable on AMD > processors too? Replied in 1/4 thread. > > (2) Should we read CR4 (or call CPUID) every time this function is invoked? 
> Can we perform the check at CpuDxe startup, and cache the result?

It's a good suggestion. Will address it in V2.

> (3) Should we consider the PCD (from patch #3) before accessing the
> hardware (CR4 / CPUID alike)?

I want to avoid touching PCD in this CPU driver. All the information is
taken in DxeIpl and passed to the CPU driver through Cr4.LA57.

> Thanks
> Laszlo

^ permalink raw reply	[flat|nested] 23+ messages in thread
* Re: [edk2-devel] [PATCH 2/4] UefiCpuPkg/CpuDxe: Support parsing 5-level page table 2019-07-24 7:03 ` Ni, Ray @ 2019-07-25 17:19 ` Laszlo Ersek 0 siblings, 0 replies; 23+ messages in thread From: Laszlo Ersek @ 2019-07-25 17:19 UTC (permalink / raw) To: Ni, Ray, devel@edk2.groups.io; +Cc: Dong, Eric On 07/24/19 09:03, Ni, Ray wrote: > > >> -----Original Message----- >> From: Laszlo Ersek <lersek@redhat.com> >> Sent: Tuesday, July 23, 2019 5:23 PM >> To: devel@edk2.groups.io; Ni, Ray <ray.ni@intel.com> >> Cc: Dong, Eric <eric.dong@intel.com> >> Subject: Re: [edk2-devel] [PATCH 2/4] UefiCpuPkg/CpuDxe: Support parsing >> 5-level page table >> >> On 07/22/19 10:15, Ni, Ray wrote: >>> REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 >>> >>> Signed-off-by: Ray Ni <ray.ni@intel.com> >>> Cc: Eric Dong <eric.dong@intel.com> >>> Cc: Laszlo Ersek <lersek@redhat.com> >>> --- >>> UefiCpuPkg/CpuDxe/CpuPageTable.c | 22 ++++++++++++++++++++-- >>> 1 file changed, 20 insertions(+), 2 deletions(-) >>> >>> diff --git a/UefiCpuPkg/CpuDxe/CpuPageTable.c >>> b/UefiCpuPkg/CpuDxe/CpuPageTable.c >>> index c369b44f12..8e959eb2b7 100644 >>> --- a/UefiCpuPkg/CpuDxe/CpuPageTable.c >>> +++ b/UefiCpuPkg/CpuDxe/CpuPageTable.c >>> @@ -1,7 +1,7 @@ >>> /** @file >>> Page table management support. >>> >>> - Copyright (c) 2017, Intel Corporation. All rights reserved.<BR> >>> + Copyright (c) 2017 - 2019, Intel Corporation. All rights >>> + reserved.<BR> >>> Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR> >>> >>> SPDX-License-Identifier: BSD-2-Clause-Patent @@ -276,25 +276,43 @@ >>> GetPageTableEntry ( >>> UINTN Index2; >>> UINTN Index3; >>> UINTN Index4; >>> + UINTN Index5; >>> UINT64 *L1PageTable; >>> UINT64 *L2PageTable; >>> UINT64 *L3PageTable; >>> UINT64 *L4PageTable; >>> + UINT64 *L5PageTable; >>> UINT64 AddressEncMask; >>> + IA32_CR4 Cr4; >>> + BOOLEAN Enable5LevelPaging; >>> >>> ASSERT (PagingContext != NULL); >>> >>> + Index5 = ((UINTN)RShiftU64 (Address, 48)) & >> PAGING_PAE_INDEX_MASK; >>> Index4 = ((UINTN)RShiftU64 (Address, 39)) & >> PAGING_PAE_INDEX_MASK; >>> Index3 = ((UINTN)Address >> 30) & PAGING_PAE_INDEX_MASK; >>> Index2 = ((UINTN)Address >> 21) & PAGING_PAE_INDEX_MASK; >>> Index1 = ((UINTN)Address >> 12) & PAGING_PAE_INDEX_MASK; >>> >>> + Cr4.UintN = AsmReadCr4 (); >>> + Enable5LevelPaging = (BOOLEAN) (Cr4.Bits.LA57 == 1); >>> + >>> // Make sure AddressEncMask is contained to smallest supported address >> field. >>> // >>> AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) >> & >>> PAGING_1G_ADDRESS_MASK_64; >>> >>> if (PagingContext->MachineType == IMAGE_FILE_MACHINE_X64) { >>> - L4PageTable = (UINT64 *)(UINTN)PagingContext- >>> ContextData.X64.PageTableBase; >>> + if (Enable5LevelPaging) { >>> + L5PageTable = (UINT64 *)(UINTN)PagingContext- >>> ContextData.X64.PageTableBase; >>> + if (L5PageTable[Index5] == 0) { >>> + *PageAttribute = PageNone; >>> + return NULL; >>> + } >>> + >>> + L4PageTable = (UINT64 *)(UINTN)(L5PageTable[Index5] & >> ~AddressEncMask & PAGING_4K_ADDRESS_MASK_64); >>> + } else { >>> + L4PageTable = (UINT64 *)(UINTN)PagingContext- >>> ContextData.X64.PageTableBase; >>> + } >>> if (L4PageTable[Index4] == 0) { >>> *PageAttribute = PageNone; >>> return NULL; >>> >> >> The patch seems to be a no-op if Enable5LevelPaging is FALSE, so that's good. >> >> Questions: >> >> (1) Same question as under patch #1: is the CR4 check reliable on AMD >> processors too? > > Replied in 1/4 thread. 
> >> >> (2) Should we read CR4 (or call CPUID) every time this function is invoked? >> Can we perform the check at CpuDxe startup, and cache the result? > It's a good suggestion. Will address it in V2. > > >> >> (3) Should we consider the PCD (from patch #3) before accessing the >> hardware (CR4 / CPUID alike)? > I want to avoid touching PCD in this CPU driver. > All the information is taken in DxeIpl and pass to CPU driver through Cr4.LA57. Makes sense, thanks. Laszlo ^ permalink raw reply [flat|nested] 23+ messages in thread
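A minimal sketch of the caching approach agreed above (names are illustrative; the actual V2 patch may differ):

    STATIC BOOLEAN  mPage5LevelEnabled = FALSE;

    //
    // Intended to be called once from the CpuDxe entry point;
    // GetPageTableEntry() would then test mPage5LevelEnabled instead of
    // reading CR4 on every invocation.
    //
    VOID
    InitPagingLevelCache (
      VOID
      )
    {
      IA32_CR4  Cr4;

      Cr4.UintN          = AsmReadCr4 ();
      mPage5LevelEnabled = (BOOLEAN) (Cr4.Bits.LA57 == 1);
    }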
* [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD PcdUse5LevelPageTable 2019-07-22 8:15 [PATCH 0/4] Support 5-level paging in DXE long mode Ni, Ray 2019-07-22 8:15 ` [PATCH 1/4] UefiCpuPkg/MpInitLib: Enable 5-level paging for AP when BSP's enabled Ni, Ray 2019-07-22 8:15 ` [PATCH 2/4] UefiCpuPkg/CpuDxe: Support parsing 5-level page table Ni, Ray @ 2019-07-22 8:15 ` Ni, Ray 2019-07-23 2:05 ` [edk2-devel] " Wu, Hao A 2019-07-24 2:02 ` Dong, Eric 2019-07-22 8:15 ` [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode Ni, Ray ` (2 subsequent siblings) 5 siblings, 2 replies; 23+ messages in thread From: Ni, Ray @ 2019-07-22 8:15 UTC (permalink / raw) To: devel; +Cc: Eric Dong, Laszlo Ersek REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 The PCD indicates if 5-Level Paging will be enabled in long mode. 5-Level Paging will not be enabled when the PCD is TRUE but CPU doesn't support 5-Level Paging. Signed-off-by: Ray Ni <ray.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> --- MdeModulePkg/MdeModulePkg.dec | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/MdeModulePkg/MdeModulePkg.dec b/MdeModulePkg/MdeModulePkg.dec index 12e0bbf579..21388595a9 100644 --- a/MdeModulePkg/MdeModulePkg.dec +++ b/MdeModulePkg/MdeModulePkg.dec @@ -1991,6 +1991,13 @@ [PcdsFixedAtBuild, PcdsPatchableInModule, PcdsDynamic, PcdsDynamicEx] # @Prompt The address mask when memory encryption is enabled. gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask|0x0|UINT64|0x30001047 + ## Indicates if 5-Level Paging will be enabled in long mode. 5-Level Paging will not be enabled + # when the PCD is TRUE but CPU doesn't support 5-Level Paging. + # TRUE - 5-Level Paging will be enabled.<BR> + # FALSE - 5-Level Paging will not be enabled.<BR> + # @Prompt Enable 5-Level Paging support in long mode. + gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable|FALSE|BOOLEAN|0x0001105F + ## Capsule In Ram is to use memory to deliver the capsules that will be processed after system # reset.<BR><BR> # This PCD indicates if the Capsule In Ram is supported.<BR> -- 2.21.0.windows.1 ^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [edk2-devel] [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD PcdUse5LevelPageTable 2019-07-22 8:15 ` [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD PcdUse5LevelPageTable Ni, Ray @ 2019-07-23 2:05 ` Wu, Hao A 2019-07-23 14:20 ` Ni, Ray 2019-07-24 2:02 ` Dong, Eric 1 sibling, 1 reply; 23+ messages in thread From: Wu, Hao A @ 2019-07-23 2:05 UTC (permalink / raw) To: devel@edk2.groups.io, Ni, Ray; +Cc: Dong, Eric, Laszlo Ersek > -----Original Message----- > From: devel@edk2.groups.io [mailto:devel@edk2.groups.io] On Behalf Of Ni, > Ray > Sent: Monday, July 22, 2019 4:16 PM > To: devel@edk2.groups.io > Cc: Dong, Eric; Laszlo Ersek > Subject: [edk2-devel] [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD > PcdUse5LevelPageTable > > REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 > > The PCD indicates if 5-Level Paging will be enabled in long mode. > 5-Level Paging will not be enabled when the PCD is TRUE but CPU > doesn't support 5-Level Paging. > > Signed-off-by: Ray Ni <ray.ni@intel.com> > Cc: Eric Dong <eric.dong@intel.com> > Cc: Laszlo Ersek <lersek@redhat.com> > --- > MdeModulePkg/MdeModulePkg.dec | 7 +++++++ Hello, Please help to add the PCD description string definitions in MdeModulePkg.uni. One question, is this PCD introduced for the consideration of memory consumption by the page table entries? Or is there any other purpose? Best Regards, Hao Wu > 1 file changed, 7 insertions(+) > > diff --git a/MdeModulePkg/MdeModulePkg.dec > b/MdeModulePkg/MdeModulePkg.dec > index 12e0bbf579..21388595a9 100644 > --- a/MdeModulePkg/MdeModulePkg.dec > +++ b/MdeModulePkg/MdeModulePkg.dec > @@ -1991,6 +1991,13 @@ [PcdsFixedAtBuild, PcdsPatchableInModule, > PcdsDynamic, PcdsDynamicEx] > # @Prompt The address mask when memory encryption is enabled. > > gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrM > ask|0x0|UINT64|0x30001047 > > + ## Indicates if 5-Level Paging will be enabled in long mode. 5-Level Paging > will not be enabled > + # when the PCD is TRUE but CPU doesn't support 5-Level Paging. > + # TRUE - 5-Level Paging will be enabled.<BR> > + # FALSE - 5-Level Paging will not be enabled.<BR> > + # @Prompt Enable 5-Level Paging support in long mode. > + > gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable|FALSE|BOOLE > AN|0x0001105F > + > ## Capsule In Ram is to use memory to deliver the capsules that will be > processed after system > # reset.<BR><BR> > # This PCD indicates if the Capsule In Ram is supported.<BR> > -- > 2.21.0.windows.1 > > > ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [edk2-devel] [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD PcdUse5LevelPageTable 2019-07-23 2:05 ` [edk2-devel] " Wu, Hao A @ 2019-07-23 14:20 ` Ni, Ray 0 siblings, 0 replies; 23+ messages in thread From: Ni, Ray @ 2019-07-23 14:20 UTC (permalink / raw) To: Wu, Hao A, devel@edk2.groups.io; +Cc: Dong, Eric, Laszlo Ersek Hao, Certain OS loaders may fail to boot when 5-level paging is used. PCD can be used to restrict to 4-level paging. > -----Original Message----- > From: Wu, Hao A > Sent: Tuesday, July 23, 2019 10:06 AM > To: devel@edk2.groups.io; Ni, Ray <ray.ni@intel.com> > Cc: Dong, Eric <eric.dong@intel.com>; Laszlo Ersek <lersek@redhat.com> > Subject: RE: [edk2-devel] [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD PcdUse5LevelPageTable > > > -----Original Message----- > > From: devel@edk2.groups.io [mailto:devel@edk2.groups.io] On Behalf Of Ni, > > Ray > > Sent: Monday, July 22, 2019 4:16 PM > > To: devel@edk2.groups.io > > Cc: Dong, Eric; Laszlo Ersek > > Subject: [edk2-devel] [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD > > PcdUse5LevelPageTable > > > > REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 > > > > The PCD indicates if 5-Level Paging will be enabled in long mode. > > 5-Level Paging will not be enabled when the PCD is TRUE but CPU > > doesn't support 5-Level Paging. > > > > Signed-off-by: Ray Ni <ray.ni@intel.com> > > Cc: Eric Dong <eric.dong@intel.com> > > Cc: Laszlo Ersek <lersek@redhat.com> > > --- > > MdeModulePkg/MdeModulePkg.dec | 7 +++++++ > > > Hello, > > Please help to add the PCD description string definitions in MdeModulePkg.uni. > > One question, is this PCD introduced for the consideration of memory > consumption by the page table entries? Or is there any other purpose? > > Best Regards, > Hao Wu > > > > 1 file changed, 7 insertions(+) > > > > diff --git a/MdeModulePkg/MdeModulePkg.dec > > b/MdeModulePkg/MdeModulePkg.dec > > index 12e0bbf579..21388595a9 100644 > > --- a/MdeModulePkg/MdeModulePkg.dec > > +++ b/MdeModulePkg/MdeModulePkg.dec > > @@ -1991,6 +1991,13 @@ [PcdsFixedAtBuild, PcdsPatchableInModule, > > PcdsDynamic, PcdsDynamicEx] > > # @Prompt The address mask when memory encryption is enabled. > > > > gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrM > > ask|0x0|UINT64|0x30001047 > > > > + ## Indicates if 5-Level Paging will be enabled in long mode. 5-Level Paging > > will not be enabled > > + # when the PCD is TRUE but CPU doesn't support 5-Level Paging. > > + # TRUE - 5-Level Paging will be enabled.<BR> > > + # FALSE - 5-Level Paging will not be enabled.<BR> > > + # @Prompt Enable 5-Level Paging support in long mode. > > + > > gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable|FALSE|BOOLE > > AN|0x0001105F > > + > > ## Capsule In Ram is to use memory to deliver the capsules that will be > > processed after system > > # reset.<BR><BR> > > # This PCD indicates if the Capsule In Ram is supported.<BR> > > -- > > 2.21.0.windows.1 > > > > > > ^ permalink raw reply [flat|nested] 23+ messages in thread
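Since the PCD defaults to FALSE, a platform has to opt in explicitly; one that must keep 4-level paging for OS-loader compatibility simply keeps the default. An opt-in could look like the following DSC fragment (illustrative; section placement follows the platform's own conventions):

    [PcdsFixedAtBuild.X64]
      # Opt in to 5-level paging; it still takes effect only if the CPU
      # reports LA57 support (CPUID.(EAX=07H, ECX=0):ECX[16]).
      gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable|TRUE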
* Re: [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD PcdUse5LevelPageTable 2019-07-22 8:15 ` [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD PcdUse5LevelPageTable Ni, Ray 2019-07-23 2:05 ` [edk2-devel] " Wu, Hao A @ 2019-07-24 2:02 ` Dong, Eric 1 sibling, 0 replies; 23+ messages in thread From: Dong, Eric @ 2019-07-24 2:02 UTC (permalink / raw) To: Ni, Ray, devel@edk2.groups.io; +Cc: Laszlo Ersek Reviewed-by: Eric Dong <eric.dong@intel.com> > -----Original Message----- > From: Ni, Ray > Sent: Monday, July 22, 2019 4:16 PM > To: devel@edk2.groups.io > Cc: Dong, Eric <eric.dong@intel.com>; Laszlo Ersek <lersek@redhat.com> > Subject: [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD > PcdUse5LevelPageTable > > REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 > > The PCD indicates if 5-Level Paging will be enabled in long mode. > 5-Level Paging will not be enabled when the PCD is TRUE but CPU doesn't > support 5-Level Paging. > > Signed-off-by: Ray Ni <ray.ni@intel.com> > Cc: Eric Dong <eric.dong@intel.com> > Cc: Laszlo Ersek <lersek@redhat.com> > --- > MdeModulePkg/MdeModulePkg.dec | 7 +++++++ > 1 file changed, 7 insertions(+) > > diff --git a/MdeModulePkg/MdeModulePkg.dec > b/MdeModulePkg/MdeModulePkg.dec index 12e0bbf579..21388595a9 > 100644 > --- a/MdeModulePkg/MdeModulePkg.dec > +++ b/MdeModulePkg/MdeModulePkg.dec > @@ -1991,6 +1991,13 @@ [PcdsFixedAtBuild, PcdsPatchableInModule, > PcdsDynamic, PcdsDynamicEx] > # @Prompt The address mask when memory encryption is enabled. > > gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrM > ask|0x0|UINT64|0x30001047 > > + ## Indicates if 5-Level Paging will be enabled in long mode. 5-Level > + Paging will not be enabled # when the PCD is TRUE but CPU doesn't > support 5-Level Paging. > + # TRUE - 5-Level Paging will be enabled.<BR> > + # FALSE - 5-Level Paging will not be enabled.<BR> > + # @Prompt Enable 5-Level Paging support in long mode. > + > + > gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable|FALSE|BOOLE > AN|0x0 > + 001105F > + > ## Capsule In Ram is to use memory to deliver the capsules that will be > processed after system > # reset.<BR><BR> > # This PCD indicates if the Capsule In Ram is supported.<BR> > -- > 2.21.0.windows.1 ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode 2019-07-22 8:15 [PATCH 0/4] Support 5-level paging in DXE long mode Ni, Ray ` (2 preceding siblings ...) 2019-07-22 8:15 ` [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD PcdUse5LevelPageTable Ni, Ray @ 2019-07-22 8:15 ` Ni, Ray 2019-07-23 2:05 ` [edk2-devel] " Wu, Hao A 2019-07-23 9:46 ` Laszlo Ersek [not found] ` <15B3ACB4E8DDF416.7925@groups.io> [not found] ` <15B3ACB536E52165.29669@groups.io> 5 siblings, 2 replies; 23+ messages in thread From: Ni, Ray @ 2019-07-22 8:15 UTC (permalink / raw) To: devel; +Cc: Eric Dong, Laszlo Ersek REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 DxeIpl is responsible to create page table for DXE phase running either in long mode or in 32bit mode with certain protection mechanism enabled (refer to ToBuildPageTable()). The patch updates DxeIpl to create 5-level page table for DXE phase running in long mode when PcdUse5LevelPageTable is TRUE and CPU supports 5-level page table. Signed-off-by: Ray Ni <ray.ni@intel.com> Cc: Eric Dong <eric.dong@intel.com> Cc: Laszlo Ersek <lersek@redhat.com> --- MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf | 1 + .../Core/DxeIplPeim/X64/VirtualMemory.c | 227 ++++++++++++------ 2 files changed, 151 insertions(+), 77 deletions(-) diff --git a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf index abc3217b01..98bc17fc9d 100644 --- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf +++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf @@ -110,6 +110,7 @@ [Pcd.IA32,Pcd.X64] gEfiMdeModulePkgTokenSpaceGuid.PcdNullPointerDetectionPropertyMask ## CONSUMES gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask ## CONSUMES gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard ## CONSUMES + gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable ## SOMETIMES_CONSUMES [Pcd.IA32,Pcd.X64,Pcd.ARM,Pcd.AARCH64] gEfiMdeModulePkgTokenSpaceGuid.PcdSetNxForStack ## SOMETIMES_CONSUMES diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c index edc38e4525..a5bcdcc734 100644 --- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c +++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c @@ -15,7 +15,7 @@ 2) IA-32 Intel(R) Architecture Software Developer's Manual Volume 2:Instruction Set Reference, Intel 3) IA-32 Intel(R) Architecture Software Developer's Manual Volume 3:System Programmer's Guide, Intel -Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR> +Copyright (c) 2006 - 2019, Intel Corporation. All rights reserved.<BR> Copyright (c) 2017, AMD Incorporated. 
All rights reserved.<BR> SPDX-License-Identifier: BSD-2-Clause-Patent @@ -626,14 +626,19 @@ CreateIdentityMappingPageTables ( ) { UINT32 RegEax; + UINT32 RegEbx; + UINT32 RegEcx; UINT32 RegEdx; UINT8 PhysicalAddressBits; EFI_PHYSICAL_ADDRESS PageAddress; + UINTN IndexOfPml5Entries; UINTN IndexOfPml4Entries; UINTN IndexOfPdpEntries; UINTN IndexOfPageDirectoryEntries; + UINT32 NumberOfPml5EntriesNeeded; UINT32 NumberOfPml4EntriesNeeded; UINT32 NumberOfPdpEntriesNeeded; + PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel5Entry; PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel4Entry; PAGE_MAP_AND_DIRECTORY_POINTER *PageMap; PAGE_MAP_AND_DIRECTORY_POINTER *PageDirectoryPointerEntry; @@ -641,9 +646,11 @@ CreateIdentityMappingPageTables ( UINTN TotalPagesNum; UINTN BigPageAddress; VOID *Hob; + BOOLEAN Page5LevelSupport; BOOLEAN Page1GSupport; PAGE_TABLE_1G_ENTRY *PageDirectory1GEntry; UINT64 AddressEncMask; + IA32_CR4 Cr4; // // Make sure AddressEncMask is contained to smallest supported address field @@ -677,33 +684,66 @@ CreateIdentityMappingPageTables ( } } + Page5LevelSupport = FALSE; + if (PcdGetBool (PcdUse5LevelPageTable)) { + AsmCpuidEx (0x7, 0, &RegEax, &RegEbx, &RegEcx, &RegEdx); + DEBUG ((DEBUG_INFO, "Cpuid(7/0): %08x/%08x/%08x/%08x\n", RegEax, RegEbx, RegEcx, RegEdx)); + if ((RegEcx & BIT16) != 0) { + Page5LevelSupport = TRUE; + } + } + + DEBUG ((DEBUG_INFO, "AddressBits/5LevelPaging/1GPage = %d/%d/%d\n", PhysicalAddressBits, Page5LevelSupport, Page1GSupport)); + // - // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses. + // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses + // when 5-Level Paging is disabled, + // due to either unsupported by HW, or disabled by PCD. // ASSERT (PhysicalAddressBits <= 52); - if (PhysicalAddressBits > 48) { + if (!Page5LevelSupport && PhysicalAddressBits > 48) { PhysicalAddressBits = 48; } // // Calculate the table entries needed. // - if (PhysicalAddressBits <= 39 ) { - NumberOfPml4EntriesNeeded = 1; - NumberOfPdpEntriesNeeded = (UINT32)LShiftU64 (1, (PhysicalAddressBits - 30)); - } else { - NumberOfPml4EntriesNeeded = (UINT32)LShiftU64 (1, (PhysicalAddressBits - 39)); - NumberOfPdpEntriesNeeded = 512; + NumberOfPml5EntriesNeeded = 1; + if (PhysicalAddressBits > 48) { + NumberOfPml5EntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 48); + PhysicalAddressBits = 48; } + NumberOfPml4EntriesNeeded = 1; + if (PhysicalAddressBits > 39) { + NumberOfPml4EntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 39); + PhysicalAddressBits = 39; + } + + NumberOfPdpEntriesNeeded = 1; + ASSERT (PhysicalAddressBits > 30); + NumberOfPdpEntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 30); + // // Pre-allocate big pages to avoid later allocations. // if (!Page1GSupport) { - TotalPagesNum = (NumberOfPdpEntriesNeeded + 1) * NumberOfPml4EntriesNeeded + 1; + TotalPagesNum = ((NumberOfPdpEntriesNeeded + 1) * NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1; } else { - TotalPagesNum = NumberOfPml4EntriesNeeded + 1; + TotalPagesNum = (NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1; + } + + // + // Substract the one page occupied by PML5 entries if 5-Level Paging is disabled. 
+ // + if (!Page5LevelSupport) { + TotalPagesNum--; } + + DEBUG ((DEBUG_INFO, "Pml5/Pml4/Pdp/TotalPage = %d/%d/%d/%d\n", + NumberOfPml5EntriesNeeded, NumberOfPml4EntriesNeeded, + NumberOfPdpEntriesNeeded, TotalPagesNum)); + BigPageAddress = (UINTN) AllocatePageTableMemory (TotalPagesNum); ASSERT (BigPageAddress != 0); @@ -711,92 +751,125 @@ CreateIdentityMappingPageTables ( // By architecture only one PageMapLevel4 exists - so lets allocate storage for it. // PageMap = (VOID *) BigPageAddress; - BigPageAddress += SIZE_4KB; - - PageMapLevel4Entry = PageMap; - PageAddress = 0; - for (IndexOfPml4Entries = 0; IndexOfPml4Entries < NumberOfPml4EntriesNeeded; IndexOfPml4Entries++, PageMapLevel4Entry++) { + if (Page5LevelSupport) { // - // Each PML4 entry points to a page of Page Directory Pointer entires. - // So lets allocate space for them and fill them in in the IndexOfPdpEntries loop. + // By architecture only one PageMapLevel5 exists - so lets allocate storage for it. // - PageDirectoryPointerEntry = (VOID *) BigPageAddress; - BigPageAddress += SIZE_4KB; + PageMapLevel5Entry = PageMap; + BigPageAddress += SIZE_4KB; + } + PageAddress = 0; + for ( IndexOfPml5Entries = 0 + ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded + ; IndexOfPml5Entries++, PageMapLevel5Entry++) { // - // Make a PML4 Entry + // Each PML5 entry points to a page of PML4 entires. + // So lets allocate space for them and fill them in in the IndexOfPml4Entries loop. + // When 5-Level Paging is disabled, below allocation happens only once. // - PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask; - PageMapLevel4Entry->Bits.ReadWrite = 1; - PageMapLevel4Entry->Bits.Present = 1; + PageMapLevel4Entry = (VOID *) BigPageAddress; + BigPageAddress += SIZE_4KB; - if (Page1GSupport) { - PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry; + if (Page5LevelSupport) { + // + // Make a PML5 Entry + // + PageMapLevel5Entry->Uint64 = (UINT64) (UINTN) PageMapLevel4Entry; + PageMapLevel5Entry->Bits.ReadWrite = 1; + PageMapLevel5Entry->Bits.Present = 1; + } - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += SIZE_1GB) { - if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) { - Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, StackBase, StackSize); - } else { - // - // Fill in the Page Directory entries - // - PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | AddressEncMask; - PageDirectory1GEntry->Bits.ReadWrite = 1; - PageDirectory1GEntry->Bits.Present = 1; - PageDirectory1GEntry->Bits.MustBe1 = 1; - } - } - } else { - for (IndexOfPdpEntries = 0; IndexOfPdpEntries < NumberOfPdpEntriesNeeded; IndexOfPdpEntries++, PageDirectoryPointerEntry++) { - // - // Each Directory Pointer entries points to a page of Page Directory entires. - // So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop. - // - PageDirectoryEntry = (VOID *) BigPageAddress; - BigPageAddress += SIZE_4KB; + for ( IndexOfPml4Entries = 0 + ; IndexOfPml4Entries < (NumberOfPml5EntriesNeeded == 1 ? NumberOfPml4EntriesNeeded : 512) + ; IndexOfPml4Entries++, PageMapLevel4Entry++) { + // + // Each PML4 entry points to a page of Page Directory Pointer entires. + // So lets allocate space for them and fill them in in the IndexOfPdpEntries loop. 
+ // + PageDirectoryPointerEntry = (VOID *) BigPageAddress; + BigPageAddress += SIZE_4KB; - // - // Fill in a Page Directory Pointer Entries - // - PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask; - PageDirectoryPointerEntry->Bits.ReadWrite = 1; - PageDirectoryPointerEntry->Bits.Present = 1; + // + // Make a PML4 Entry + // + PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask; + PageMapLevel4Entry->Bits.ReadWrite = 1; + PageMapLevel4Entry->Bits.Present = 1; - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) { - if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) { - // - // Need to split this 2M page that covers NULL or stack range. - // - Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, StackBase, StackSize); + if (Page1GSupport) { + PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry; + + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += SIZE_1GB) { + if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) { + Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, StackBase, StackSize); } else { // // Fill in the Page Directory entries // - PageDirectoryEntry->Uint64 = (UINT64)PageAddress | AddressEncMask; - PageDirectoryEntry->Bits.ReadWrite = 1; - PageDirectoryEntry->Bits.Present = 1; - PageDirectoryEntry->Bits.MustBe1 = 1; + PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | AddressEncMask; + PageDirectory1GEntry->Bits.ReadWrite = 1; + PageDirectory1GEntry->Bits.Present = 1; + PageDirectory1GEntry->Bits.MustBe1 = 1; } } - } + } else { + for ( IndexOfPdpEntries = 0 + ; IndexOfPdpEntries < (NumberOfPml4EntriesNeeded == 1 ? NumberOfPdpEntriesNeeded : 512) + ; IndexOfPdpEntries++, PageDirectoryPointerEntry++) { + // + // Each Directory Pointer entries points to a page of Page Directory entires. + // So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop. + // + PageDirectoryEntry = (VOID *) BigPageAddress; + BigPageAddress += SIZE_4KB; - for (; IndexOfPdpEntries < 512; IndexOfPdpEntries++, PageDirectoryPointerEntry++) { - ZeroMem ( - PageDirectoryPointerEntry, - sizeof(PAGE_MAP_AND_DIRECTORY_POINTER) - ); + // + // Fill in a Page Directory Pointer Entries + // + PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask; + PageDirectoryPointerEntry->Bits.ReadWrite = 1; + PageDirectoryPointerEntry->Bits.Present = 1; + + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) { + if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) { + // + // Need to split this 2M page that covers NULL or stack range. + // + Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, StackBase, StackSize); + } else { + // + // Fill in the Page Directory entries + // + PageDirectoryEntry->Uint64 = (UINT64)PageAddress | AddressEncMask; + PageDirectoryEntry->Bits.ReadWrite = 1; + PageDirectoryEntry->Bits.Present = 1; + PageDirectoryEntry->Bits.MustBe1 = 1; + } + } + } + + // + // Fill with null entry for unused PDPTE + // + ZeroMem (PageDirectoryPointerEntry, (512 - IndexOfPdpEntries) * sizeof(PAGE_MAP_AND_DIRECTORY_POINTER)); } } + + // + // For the PML4 entries we are not using fill in a null entry. 
+ // + ZeroMem (PageMapLevel4Entry, (512 - IndexOfPml4Entries) * sizeof (PAGE_MAP_AND_DIRECTORY_POINTER)); } - // - // For the PML4 entries we are not using fill in a null entry. - // - for (; IndexOfPml4Entries < 512; IndexOfPml4Entries++, PageMapLevel4Entry++) { - ZeroMem ( - PageMapLevel4Entry, - sizeof (PAGE_MAP_AND_DIRECTORY_POINTER) - ); + if (Page5LevelSupport) { + Cr4.UintN = AsmReadCr4 (); + Cr4.Bits.LA57 = 1; + AsmWriteCr4 (Cr4.UintN); + // + // For the PML5 entries we are not using fill in a null entry. + // + ZeroMem (PageMapLevel5Entry, (512 - IndexOfPml5Entries) * sizeof (PAGE_MAP_AND_DIRECTORY_POINTER)); } // -- 2.21.0.windows.1 ^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode 2019-07-22 8:15 ` [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode Ni, Ray @ 2019-07-23 2:05 ` Wu, Hao A 2019-07-23 7:48 ` Laszlo Ersek 2019-07-23 9:46 ` Laszlo Ersek 1 sibling, 1 reply; 23+ messages in thread From: Wu, Hao A @ 2019-07-23 2:05 UTC (permalink / raw) To: devel@edk2.groups.io, Ni, Ray; +Cc: Dong, Eric, Laszlo Ersek > -----Original Message----- > From: devel@edk2.groups.io [mailto:devel@edk2.groups.io] On Behalf Of Ni, > Ray > Sent: Monday, July 22, 2019 4:16 PM > To: devel@edk2.groups.io > Cc: Dong, Eric; Laszlo Ersek > Subject: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level > page table for long mode > > REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 > > DxeIpl is responsible to create page table for DXE phase running > either in long mode or in 32bit mode with certain protection > mechanism enabled (refer to ToBuildPageTable()). > > The patch updates DxeIpl to create 5-level page table for DXE phase > running in long mode when PcdUse5LevelPageTable is TRUE and CPU > supports 5-level page table. > > Signed-off-by: Ray Ni <ray.ni@intel.com> > Cc: Eric Dong <eric.dong@intel.com> > Cc: Laszlo Ersek <lersek@redhat.com> > --- > MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf | 1 + > .../Core/DxeIplPeim/X64/VirtualMemory.c | 227 ++++++++++++------ > 2 files changed, 151 insertions(+), 77 deletions(-) > > diff --git a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > index abc3217b01..98bc17fc9d 100644 > --- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > +++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > @@ -110,6 +110,7 @@ [Pcd.IA32,Pcd.X64] > > gEfiMdeModulePkgTokenSpaceGuid.PcdNullPointerDetectionPropertyMask > ## CONSUMES > gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask > ## CONSUMES > gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard ## > CONSUMES > + gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable ## > SOMETIMES_CONSUMES > > [Pcd.IA32,Pcd.X64,Pcd.ARM,Pcd.AARCH64] > gEfiMdeModulePkgTokenSpaceGuid.PcdSetNxForStack ## > SOMETIMES_CONSUMES > diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > index edc38e4525..a5bcdcc734 100644 > --- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > +++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > @@ -15,7 +15,7 @@ > 2) IA-32 Intel(R) Architecture Software Developer's Manual Volume > 2:Instruction Set Reference, Intel > 3) IA-32 Intel(R) Architecture Software Developer's Manual Volume > 3:System Programmer's Guide, Intel > > -Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR> > +Copyright (c) 2006 - 2019, Intel Corporation. All rights reserved.<BR> > Copyright (c) 2017, AMD Incorporated. 
All rights reserved.<BR> > > SPDX-License-Identifier: BSD-2-Clause-Patent > @@ -626,14 +626,19 @@ CreateIdentityMappingPageTables ( > ) > { > UINT32 RegEax; > + UINT32 RegEbx; > + UINT32 RegEcx; > UINT32 RegEdx; > UINT8 PhysicalAddressBits; > EFI_PHYSICAL_ADDRESS PageAddress; > + UINTN IndexOfPml5Entries; > UINTN IndexOfPml4Entries; > UINTN IndexOfPdpEntries; > UINTN IndexOfPageDirectoryEntries; > + UINT32 NumberOfPml5EntriesNeeded; > UINT32 NumberOfPml4EntriesNeeded; > UINT32 NumberOfPdpEntriesNeeded; > + PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel5Entry; > PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel4Entry; > PAGE_MAP_AND_DIRECTORY_POINTER *PageMap; > PAGE_MAP_AND_DIRECTORY_POINTER > *PageDirectoryPointerEntry; > @@ -641,9 +646,11 @@ CreateIdentityMappingPageTables ( > UINTN TotalPagesNum; > UINTN BigPageAddress; > VOID *Hob; > + BOOLEAN Page5LevelSupport; > BOOLEAN Page1GSupport; > PAGE_TABLE_1G_ENTRY *PageDirectory1GEntry; > UINT64 AddressEncMask; > + IA32_CR4 Cr4; > > // > // Make sure AddressEncMask is contained to smallest supported address > field > @@ -677,33 +684,66 @@ CreateIdentityMappingPageTables ( > } > } > > + Page5LevelSupport = FALSE; > + if (PcdGetBool (PcdUse5LevelPageTable)) { > + AsmCpuidEx (0x7, 0, &RegEax, &RegEbx, &RegEcx, &RegEdx); > + DEBUG ((DEBUG_INFO, "Cpuid(7/0): %08x/%08x/%08x/%08x\n", RegEax, > RegEbx, RegEcx, RegEdx)); > + if ((RegEcx & BIT16) != 0) { > + Page5LevelSupport = TRUE; > + } > + } > + > + DEBUG ((DEBUG_INFO, "AddressBits/5LevelPaging/1GPage > = %d/%d/%d\n", PhysicalAddressBits, Page5LevelSupport, Page1GSupport)); > + > // > - // IA-32e paging translates 48-bit linear addresses to 52-bit physical > addresses. > + // IA-32e paging translates 48-bit linear addresses to 52-bit physical > addresses > + // when 5-Level Paging is disabled, > + // due to either unsupported by HW, or disabled by PCD. > // > ASSERT (PhysicalAddressBits <= 52); > - if (PhysicalAddressBits > 48) { > + if (!Page5LevelSupport && PhysicalAddressBits > 48) { > PhysicalAddressBits = 48; > } > > // > // Calculate the table entries needed. > // > - if (PhysicalAddressBits <= 39 ) { > - NumberOfPml4EntriesNeeded = 1; > - NumberOfPdpEntriesNeeded = (UINT32)LShiftU64 (1, > (PhysicalAddressBits - 30)); > - } else { > - NumberOfPml4EntriesNeeded = (UINT32)LShiftU64 (1, > (PhysicalAddressBits - 39)); > - NumberOfPdpEntriesNeeded = 512; > + NumberOfPml5EntriesNeeded = 1; > + if (PhysicalAddressBits > 48) { > + NumberOfPml5EntriesNeeded = (UINT32) LShiftU64 (1, > PhysicalAddressBits - 48); > + PhysicalAddressBits = 48; > } > > + NumberOfPml4EntriesNeeded = 1; > + if (PhysicalAddressBits > 39) { > + NumberOfPml4EntriesNeeded = (UINT32) LShiftU64 (1, > PhysicalAddressBits - 39); > + PhysicalAddressBits = 39; > + } > + > + NumberOfPdpEntriesNeeded = 1; > + ASSERT (PhysicalAddressBits > 30); > + NumberOfPdpEntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits > - 30); > + > // > // Pre-allocate big pages to avoid later allocations. > // > if (!Page1GSupport) { > - TotalPagesNum = (NumberOfPdpEntriesNeeded + 1) * > NumberOfPml4EntriesNeeded + 1; > + TotalPagesNum = ((NumberOfPdpEntriesNeeded + 1) * > NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1; > } else { > - TotalPagesNum = NumberOfPml4EntriesNeeded + 1; > + TotalPagesNum = (NumberOfPml4EntriesNeeded + 1) * > NumberOfPml5EntriesNeeded + 1; > + } > + > + // > + // Substract the one page occupied by PML5 entries if 5-Level Paging is > disabled. 
> + // > + if (!Page5LevelSupport) { > + TotalPagesNum--; > } > + > + DEBUG ((DEBUG_INFO, "Pml5/Pml4/Pdp/TotalPage = %d/%d/%d/%d\n", > + NumberOfPml5EntriesNeeded, NumberOfPml4EntriesNeeded, > + NumberOfPdpEntriesNeeded, TotalPagesNum)); > + > BigPageAddress = (UINTN) AllocatePageTableMemory (TotalPagesNum); > ASSERT (BigPageAddress != 0); > > @@ -711,92 +751,125 @@ CreateIdentityMappingPageTables ( > // By architecture only one PageMapLevel4 exists - so lets allocate storage > for it. > // > PageMap = (VOID *) BigPageAddress; > - BigPageAddress += SIZE_4KB; > - > - PageMapLevel4Entry = PageMap; > - PageAddress = 0; > - for (IndexOfPml4Entries = 0; IndexOfPml4Entries < > NumberOfPml4EntriesNeeded; IndexOfPml4Entries++, > PageMapLevel4Entry++) { > + if (Page5LevelSupport) { > // > - // Each PML4 entry points to a page of Page Directory Pointer entires. > - // So lets allocate space for them and fill them in in the IndexOfPdpEntries > loop. > + // By architecture only one PageMapLevel5 exists - so lets allocate storage > for it. > // > - PageDirectoryPointerEntry = (VOID *) BigPageAddress; > - BigPageAddress += SIZE_4KB; > + PageMapLevel5Entry = PageMap; > + BigPageAddress += SIZE_4KB; > + } > + PageAddress = 0; > > + for ( IndexOfPml5Entries = 0 > + ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded > + ; IndexOfPml5Entries++, PageMapLevel5Entry++) { > // > - // Make a PML4 Entry > + // Each PML5 entry points to a page of PML4 entires. > + // So lets allocate space for them and fill them in in the > IndexOfPml4Entries loop. > + // When 5-Level Paging is disabled, below allocation happens only once. > // > - PageMapLevel4Entry->Uint64 = > (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask; > - PageMapLevel4Entry->Bits.ReadWrite = 1; > - PageMapLevel4Entry->Bits.Present = 1; > + PageMapLevel4Entry = (VOID *) BigPageAddress; > + BigPageAddress += SIZE_4KB; > > - if (Page1GSupport) { > - PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry; > + if (Page5LevelSupport) { > + // > + // Make a PML5 Entry > + // > + PageMapLevel5Entry->Uint64 = (UINT64) (UINTN) PageMapLevel4Entry; Hello, Do we need to bitwise-or with 'AddressEncMask' here? Best Regards, Hao Wu > + PageMapLevel5Entry->Bits.ReadWrite = 1; > + PageMapLevel5Entry->Bits.Present = 1; > + } > > - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; > IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += > SIZE_1GB) { > - if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) { > - Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, > StackBase, StackSize); > - } else { > - // > - // Fill in the Page Directory entries > - // > - PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | > AddressEncMask; > - PageDirectory1GEntry->Bits.ReadWrite = 1; > - PageDirectory1GEntry->Bits.Present = 1; > - PageDirectory1GEntry->Bits.MustBe1 = 1; > - } > - } > - } else { > - for (IndexOfPdpEntries = 0; IndexOfPdpEntries < > NumberOfPdpEntriesNeeded; IndexOfPdpEntries++, > PageDirectoryPointerEntry++) { > - // > - // Each Directory Pointer entries points to a page of Page Directory > entires. > - // So allocate space for them and fill them in in the > IndexOfPageDirectoryEntries loop. > - // > - PageDirectoryEntry = (VOID *) BigPageAddress; > - BigPageAddress += SIZE_4KB; > + for ( IndexOfPml4Entries = 0 > + ; IndexOfPml4Entries < (NumberOfPml5EntriesNeeded == 1 ? 
> NumberOfPml4EntriesNeeded : 512) > + ; IndexOfPml4Entries++, PageMapLevel4Entry++) { > + // > + // Each PML4 entry points to a page of Page Directory Pointer entires. > + // So lets allocate space for them and fill them in in the > IndexOfPdpEntries loop. > + // > + PageDirectoryPointerEntry = (VOID *) BigPageAddress; > + BigPageAddress += SIZE_4KB; > > - // > - // Fill in a Page Directory Pointer Entries > - // > - PageDirectoryPointerEntry->Uint64 = > (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask; > - PageDirectoryPointerEntry->Bits.ReadWrite = 1; > - PageDirectoryPointerEntry->Bits.Present = 1; > + // > + // Make a PML4 Entry > + // > + PageMapLevel4Entry->Uint64 = > (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask; > + PageMapLevel4Entry->Bits.ReadWrite = 1; > + PageMapLevel4Entry->Bits.Present = 1; > > - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < > 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += > SIZE_2MB) { > - if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) { > - // > - // Need to split this 2M page that covers NULL or stack range. > - // > - Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, > StackBase, StackSize); > + if (Page1GSupport) { > + PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry; > + > + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < > 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress > += SIZE_1GB) { > + if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) { > + Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, > StackBase, StackSize); > } else { > // > // Fill in the Page Directory entries > // > - PageDirectoryEntry->Uint64 = (UINT64)PageAddress | > AddressEncMask; > - PageDirectoryEntry->Bits.ReadWrite = 1; > - PageDirectoryEntry->Bits.Present = 1; > - PageDirectoryEntry->Bits.MustBe1 = 1; > + PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | > AddressEncMask; > + PageDirectory1GEntry->Bits.ReadWrite = 1; > + PageDirectory1GEntry->Bits.Present = 1; > + PageDirectory1GEntry->Bits.MustBe1 = 1; > } > } > - } > + } else { > + for ( IndexOfPdpEntries = 0 > + ; IndexOfPdpEntries < (NumberOfPml4EntriesNeeded == 1 ? > NumberOfPdpEntriesNeeded : 512) > + ; IndexOfPdpEntries++, PageDirectoryPointerEntry++) { > + // > + // Each Directory Pointer entries points to a page of Page Directory > entires. > + // So allocate space for them and fill them in in the > IndexOfPageDirectoryEntries loop. > + // > + PageDirectoryEntry = (VOID *) BigPageAddress; > + BigPageAddress += SIZE_4KB; > > - for (; IndexOfPdpEntries < 512; IndexOfPdpEntries++, > PageDirectoryPointerEntry++) { > - ZeroMem ( > - PageDirectoryPointerEntry, > - sizeof(PAGE_MAP_AND_DIRECTORY_POINTER) > - ); > + // > + // Fill in a Page Directory Pointer Entries > + // > + PageDirectoryPointerEntry->Uint64 = > (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask; > + PageDirectoryPointerEntry->Bits.ReadWrite = 1; > + PageDirectoryPointerEntry->Bits.Present = 1; > + > + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < > 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += > SIZE_2MB) { > + if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) { > + // > + // Need to split this 2M page that covers NULL or stack range. 
> + // > + Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, > StackBase, StackSize); > + } else { > + // > + // Fill in the Page Directory entries > + // > + PageDirectoryEntry->Uint64 = (UINT64)PageAddress | > AddressEncMask; > + PageDirectoryEntry->Bits.ReadWrite = 1; > + PageDirectoryEntry->Bits.Present = 1; > + PageDirectoryEntry->Bits.MustBe1 = 1; > + } > + } > + } > + > + // > + // Fill with null entry for unused PDPTE > + // > + ZeroMem (PageDirectoryPointerEntry, (512 - IndexOfPdpEntries) * > sizeof(PAGE_MAP_AND_DIRECTORY_POINTER)); > } > } > + > + // > + // For the PML4 entries we are not using fill in a null entry. > + // > + ZeroMem (PageMapLevel4Entry, (512 - IndexOfPml4Entries) * sizeof > (PAGE_MAP_AND_DIRECTORY_POINTER)); > } > > - // > - // For the PML4 entries we are not using fill in a null entry. > - // > - for (; IndexOfPml4Entries < 512; IndexOfPml4Entries++, > PageMapLevel4Entry++) { > - ZeroMem ( > - PageMapLevel4Entry, > - sizeof (PAGE_MAP_AND_DIRECTORY_POINTER) > - ); > + if (Page5LevelSupport) { > + Cr4.UintN = AsmReadCr4 (); > + Cr4.Bits.LA57 = 1; > + AsmWriteCr4 (Cr4.UintN); > + // > + // For the PML5 entries we are not using fill in a null entry. > + // > + ZeroMem (PageMapLevel5Entry, (512 - IndexOfPml5Entries) * sizeof > (PAGE_MAP_AND_DIRECTORY_POINTER)); > } > > // > -- > 2.21.0.windows.1 > > > ^ permalink raw reply [flat|nested] 23+ messages in thread
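For context on the question above: AddressEncMask is the memory-encryption (SEV C-bit) mask that CreateIdentityMappingPageTables() builds from a PCD near the top of the function, outside the hunks quoted in this thread. A sketch of that pre-existing assignment, reproduced here from the current VirtualMemory.c as an assumption (the posted patch does not touch it):

  //
  // Make sure AddressEncMask is contained to smallest supported address field
  //
  AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;

Every page-table entry that points at encrypted memory has to carry that mask, which is why the bare PML5 assignment draws the question.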
* Re: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode 2019-07-23 2:05 ` [edk2-devel] " Wu, Hao A @ 2019-07-23 7:48 ` Laszlo Ersek 2019-07-23 19:45 ` Singh, Brijesh 0 siblings, 1 reply; 23+ messages in thread From: Laszlo Ersek @ 2019-07-23 7:48 UTC (permalink / raw) To: Wu, Hao A, devel@edk2.groups.io, Ni, Ray; +Cc: Dong, Eric, Brijesh Singh (+ Brijesh, and one comment below) On 07/23/19 04:05, Wu, Hao A wrote: >> -----Original Message----- >> From: devel@edk2.groups.io [mailto:devel@edk2.groups.io] On Behalf Of Ni, >> Ray >> Sent: Monday, July 22, 2019 4:16 PM >> To: devel@edk2.groups.io >> Cc: Dong, Eric; Laszlo Ersek >> Subject: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level >> page table for long mode >> >> REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 >> >> DxeIpl is responsible to create page table for DXE phase running >> either in long mode or in 32bit mode with certain protection >> mechanism enabled (refer to ToBuildPageTable()). >> >> The patch updates DxeIpl to create 5-level page table for DXE phase >> running in long mode when PcdUse5LevelPageTable is TRUE and CPU >> supports 5-level page table. >> >> Signed-off-by: Ray Ni <ray.ni@intel.com> >> Cc: Eric Dong <eric.dong@intel.com> >> Cc: Laszlo Ersek <lersek@redhat.com> >> --- >> MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf | 1 + >> .../Core/DxeIplPeim/X64/VirtualMemory.c | 227 ++++++++++++------ >> 2 files changed, 151 insertions(+), 77 deletions(-) >> >> diff --git a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf >> b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf >> index abc3217b01..98bc17fc9d 100644 >> --- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf >> +++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf >> @@ -110,6 +110,7 @@ [Pcd.IA32,Pcd.X64] >> >> gEfiMdeModulePkgTokenSpaceGuid.PcdNullPointerDetectionPropertyMask >> ## CONSUMES >> gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask >> ## CONSUMES >> gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard ## >> CONSUMES >> + gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable ## >> SOMETIMES_CONSUMES >> >> [Pcd.IA32,Pcd.X64,Pcd.ARM,Pcd.AARCH64] >> gEfiMdeModulePkgTokenSpaceGuid.PcdSetNxForStack ## >> SOMETIMES_CONSUMES >> diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c >> b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c >> index edc38e4525..a5bcdcc734 100644 >> --- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c >> +++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c >> @@ -15,7 +15,7 @@ >> 2) IA-32 Intel(R) Architecture Software Developer's Manual Volume >> 2:Instruction Set Reference, Intel >> 3) IA-32 Intel(R) Architecture Software Developer's Manual Volume >> 3:System Programmer's Guide, Intel >> >> -Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR> >> +Copyright (c) 2006 - 2019, Intel Corporation. All rights reserved.<BR> >> Copyright (c) 2017, AMD Incorporated. 
All rights reserved.<BR> >> >> SPDX-License-Identifier: BSD-2-Clause-Patent >> @@ -626,14 +626,19 @@ CreateIdentityMappingPageTables ( >> ) >> { >> UINT32 RegEax; >> + UINT32 RegEbx; >> + UINT32 RegEcx; >> UINT32 RegEdx; >> UINT8 PhysicalAddressBits; >> EFI_PHYSICAL_ADDRESS PageAddress; >> + UINTN IndexOfPml5Entries; >> UINTN IndexOfPml4Entries; >> UINTN IndexOfPdpEntries; >> UINTN IndexOfPageDirectoryEntries; >> + UINT32 NumberOfPml5EntriesNeeded; >> UINT32 NumberOfPml4EntriesNeeded; >> UINT32 NumberOfPdpEntriesNeeded; >> + PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel5Entry; >> PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel4Entry; >> PAGE_MAP_AND_DIRECTORY_POINTER *PageMap; >> PAGE_MAP_AND_DIRECTORY_POINTER >> *PageDirectoryPointerEntry; >> @@ -641,9 +646,11 @@ CreateIdentityMappingPageTables ( >> UINTN TotalPagesNum; >> UINTN BigPageAddress; >> VOID *Hob; >> + BOOLEAN Page5LevelSupport; >> BOOLEAN Page1GSupport; >> PAGE_TABLE_1G_ENTRY *PageDirectory1GEntry; >> UINT64 AddressEncMask; >> + IA32_CR4 Cr4; >> >> // >> // Make sure AddressEncMask is contained to smallest supported address >> field >> @@ -677,33 +684,66 @@ CreateIdentityMappingPageTables ( >> } >> } >> >> + Page5LevelSupport = FALSE; >> + if (PcdGetBool (PcdUse5LevelPageTable)) { >> + AsmCpuidEx (0x7, 0, &RegEax, &RegEbx, &RegEcx, &RegEdx); >> + DEBUG ((DEBUG_INFO, "Cpuid(7/0): %08x/%08x/%08x/%08x\n", RegEax, >> RegEbx, RegEcx, RegEdx)); >> + if ((RegEcx & BIT16) != 0) { >> + Page5LevelSupport = TRUE; >> + } >> + } >> + >> + DEBUG ((DEBUG_INFO, "AddressBits/5LevelPaging/1GPage >> = %d/%d/%d\n", PhysicalAddressBits, Page5LevelSupport, Page1GSupport)); >> + >> // >> - // IA-32e paging translates 48-bit linear addresses to 52-bit physical >> addresses. >> + // IA-32e paging translates 48-bit linear addresses to 52-bit physical >> addresses >> + // when 5-Level Paging is disabled, >> + // due to either unsupported by HW, or disabled by PCD. >> // >> ASSERT (PhysicalAddressBits <= 52); >> - if (PhysicalAddressBits > 48) { >> + if (!Page5LevelSupport && PhysicalAddressBits > 48) { >> PhysicalAddressBits = 48; >> } >> >> // >> // Calculate the table entries needed. >> // >> - if (PhysicalAddressBits <= 39 ) { >> - NumberOfPml4EntriesNeeded = 1; >> - NumberOfPdpEntriesNeeded = (UINT32)LShiftU64 (1, >> (PhysicalAddressBits - 30)); >> - } else { >> - NumberOfPml4EntriesNeeded = (UINT32)LShiftU64 (1, >> (PhysicalAddressBits - 39)); >> - NumberOfPdpEntriesNeeded = 512; >> + NumberOfPml5EntriesNeeded = 1; >> + if (PhysicalAddressBits > 48) { >> + NumberOfPml5EntriesNeeded = (UINT32) LShiftU64 (1, >> PhysicalAddressBits - 48); >> + PhysicalAddressBits = 48; >> } >> >> + NumberOfPml4EntriesNeeded = 1; >> + if (PhysicalAddressBits > 39) { >> + NumberOfPml4EntriesNeeded = (UINT32) LShiftU64 (1, >> PhysicalAddressBits - 39); >> + PhysicalAddressBits = 39; >> + } >> + >> + NumberOfPdpEntriesNeeded = 1; >> + ASSERT (PhysicalAddressBits > 30); >> + NumberOfPdpEntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits >> - 30); >> + >> // >> // Pre-allocate big pages to avoid later allocations. 
>> // >> if (!Page1GSupport) { >> - TotalPagesNum = (NumberOfPdpEntriesNeeded + 1) * >> NumberOfPml4EntriesNeeded + 1; >> + TotalPagesNum = ((NumberOfPdpEntriesNeeded + 1) * >> NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1; >> } else { >> - TotalPagesNum = NumberOfPml4EntriesNeeded + 1; >> + TotalPagesNum = (NumberOfPml4EntriesNeeded + 1) * >> NumberOfPml5EntriesNeeded + 1; >> + } >> + >> + // >> + // Substract the one page occupied by PML5 entries if 5-Level Paging is >> disabled. >> + // >> + if (!Page5LevelSupport) { >> + TotalPagesNum--; >> } >> + >> + DEBUG ((DEBUG_INFO, "Pml5/Pml4/Pdp/TotalPage = %d/%d/%d/%d\n", >> + NumberOfPml5EntriesNeeded, NumberOfPml4EntriesNeeded, >> + NumberOfPdpEntriesNeeded, TotalPagesNum)); >> + >> BigPageAddress = (UINTN) AllocatePageTableMemory (TotalPagesNum); >> ASSERT (BigPageAddress != 0); >> >> @@ -711,92 +751,125 @@ CreateIdentityMappingPageTables ( >> // By architecture only one PageMapLevel4 exists - so lets allocate storage >> for it. >> // >> PageMap = (VOID *) BigPageAddress; >> - BigPageAddress += SIZE_4KB; >> - >> - PageMapLevel4Entry = PageMap; >> - PageAddress = 0; >> - for (IndexOfPml4Entries = 0; IndexOfPml4Entries < >> NumberOfPml4EntriesNeeded; IndexOfPml4Entries++, >> PageMapLevel4Entry++) { >> + if (Page5LevelSupport) { >> // >> - // Each PML4 entry points to a page of Page Directory Pointer entires. >> - // So lets allocate space for them and fill them in in the IndexOfPdpEntries >> loop. >> + // By architecture only one PageMapLevel5 exists - so lets allocate storage >> for it. >> // >> - PageDirectoryPointerEntry = (VOID *) BigPageAddress; >> - BigPageAddress += SIZE_4KB; >> + PageMapLevel5Entry = PageMap; >> + BigPageAddress += SIZE_4KB; >> + } >> + PageAddress = 0; >> >> + for ( IndexOfPml5Entries = 0 >> + ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded >> + ; IndexOfPml5Entries++, PageMapLevel5Entry++) { >> // >> - // Make a PML4 Entry >> + // Each PML5 entry points to a page of PML4 entires. >> + // So lets allocate space for them and fill them in in the >> IndexOfPml4Entries loop. >> + // When 5-Level Paging is disabled, below allocation happens only once. >> // >> - PageMapLevel4Entry->Uint64 = >> (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask; >> - PageMapLevel4Entry->Bits.ReadWrite = 1; >> - PageMapLevel4Entry->Bits.Present = 1; >> + PageMapLevel4Entry = (VOID *) BigPageAddress; >> + BigPageAddress += SIZE_4KB; >> >> - if (Page1GSupport) { >> - PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry; >> + if (Page5LevelSupport) { >> + // >> + // Make a PML5 Entry >> + // >> + PageMapLevel5Entry->Uint64 = (UINT64) (UINTN) PageMapLevel4Entry; > > > Hello, > > Do we need to bitwise-or with 'AddressEncMask' here? Indeed it crossed my mind how this extension intersects with SEV. I didn't ask the question myself because I thought 5-level paging didn't apply to the AMD processors that supported SEV. But maybe we should figure that out now regardless. 
Thanks Laszlo > Best Regards, > Hao Wu > > >> + PageMapLevel5Entry->Bits.ReadWrite = 1; >> + PageMapLevel5Entry->Bits.Present = 1; >> + } >> >> - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; >> IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += >> SIZE_1GB) { >> - if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) { >> - Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, >> StackBase, StackSize); >> - } else { >> - // >> - // Fill in the Page Directory entries >> - // >> - PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | >> AddressEncMask; >> - PageDirectory1GEntry->Bits.ReadWrite = 1; >> - PageDirectory1GEntry->Bits.Present = 1; >> - PageDirectory1GEntry->Bits.MustBe1 = 1; >> - } >> - } >> - } else { >> - for (IndexOfPdpEntries = 0; IndexOfPdpEntries < >> NumberOfPdpEntriesNeeded; IndexOfPdpEntries++, >> PageDirectoryPointerEntry++) { >> - // >> - // Each Directory Pointer entries points to a page of Page Directory >> entires. >> - // So allocate space for them and fill them in in the >> IndexOfPageDirectoryEntries loop. >> - // >> - PageDirectoryEntry = (VOID *) BigPageAddress; >> - BigPageAddress += SIZE_4KB; >> + for ( IndexOfPml4Entries = 0 >> + ; IndexOfPml4Entries < (NumberOfPml5EntriesNeeded == 1 ? >> NumberOfPml4EntriesNeeded : 512) >> + ; IndexOfPml4Entries++, PageMapLevel4Entry++) { >> + // >> + // Each PML4 entry points to a page of Page Directory Pointer entires. >> + // So lets allocate space for them and fill them in in the >> IndexOfPdpEntries loop. >> + // >> + PageDirectoryPointerEntry = (VOID *) BigPageAddress; >> + BigPageAddress += SIZE_4KB; >> >> - // >> - // Fill in a Page Directory Pointer Entries >> - // >> - PageDirectoryPointerEntry->Uint64 = >> (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask; >> - PageDirectoryPointerEntry->Bits.ReadWrite = 1; >> - PageDirectoryPointerEntry->Bits.Present = 1; >> + // >> + // Make a PML4 Entry >> + // >> + PageMapLevel4Entry->Uint64 = >> (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask; >> + PageMapLevel4Entry->Bits.ReadWrite = 1; >> + PageMapLevel4Entry->Bits.Present = 1; >> >> - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < >> 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += >> SIZE_2MB) { >> - if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) { >> - // >> - // Need to split this 2M page that covers NULL or stack range. 
>> - // >> - Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, >> StackBase, StackSize); >> + if (Page1GSupport) { >> + PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry; >> + >> + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < >> 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress >> += SIZE_1GB) { >> + if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) { >> + Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, >> StackBase, StackSize); >> } else { >> // >> // Fill in the Page Directory entries >> // >> - PageDirectoryEntry->Uint64 = (UINT64)PageAddress | >> AddressEncMask; >> - PageDirectoryEntry->Bits.ReadWrite = 1; >> - PageDirectoryEntry->Bits.Present = 1; >> - PageDirectoryEntry->Bits.MustBe1 = 1; >> + PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | >> AddressEncMask; >> + PageDirectory1GEntry->Bits.ReadWrite = 1; >> + PageDirectory1GEntry->Bits.Present = 1; >> + PageDirectory1GEntry->Bits.MustBe1 = 1; >> } >> } >> - } >> + } else { >> + for ( IndexOfPdpEntries = 0 >> + ; IndexOfPdpEntries < (NumberOfPml4EntriesNeeded == 1 ? >> NumberOfPdpEntriesNeeded : 512) >> + ; IndexOfPdpEntries++, PageDirectoryPointerEntry++) { >> + // >> + // Each Directory Pointer entries points to a page of Page Directory >> entires. >> + // So allocate space for them and fill them in in the >> IndexOfPageDirectoryEntries loop. >> + // >> + PageDirectoryEntry = (VOID *) BigPageAddress; >> + BigPageAddress += SIZE_4KB; >> >> - for (; IndexOfPdpEntries < 512; IndexOfPdpEntries++, >> PageDirectoryPointerEntry++) { >> - ZeroMem ( >> - PageDirectoryPointerEntry, >> - sizeof(PAGE_MAP_AND_DIRECTORY_POINTER) >> - ); >> + // >> + // Fill in a Page Directory Pointer Entries >> + // >> + PageDirectoryPointerEntry->Uint64 = >> (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask; >> + PageDirectoryPointerEntry->Bits.ReadWrite = 1; >> + PageDirectoryPointerEntry->Bits.Present = 1; >> + >> + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < >> 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += >> SIZE_2MB) { >> + if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) { >> + // >> + // Need to split this 2M page that covers NULL or stack range. >> + // >> + Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, >> StackBase, StackSize); >> + } else { >> + // >> + // Fill in the Page Directory entries >> + // >> + PageDirectoryEntry->Uint64 = (UINT64)PageAddress | >> AddressEncMask; >> + PageDirectoryEntry->Bits.ReadWrite = 1; >> + PageDirectoryEntry->Bits.Present = 1; >> + PageDirectoryEntry->Bits.MustBe1 = 1; >> + } >> + } >> + } >> + >> + // >> + // Fill with null entry for unused PDPTE >> + // >> + ZeroMem (PageDirectoryPointerEntry, (512 - IndexOfPdpEntries) * >> sizeof(PAGE_MAP_AND_DIRECTORY_POINTER)); >> } >> } >> + >> + // >> + // For the PML4 entries we are not using fill in a null entry. >> + // >> + ZeroMem (PageMapLevel4Entry, (512 - IndexOfPml4Entries) * sizeof >> (PAGE_MAP_AND_DIRECTORY_POINTER)); >> } >> >> - // >> - // For the PML4 entries we are not using fill in a null entry. >> - // >> - for (; IndexOfPml4Entries < 512; IndexOfPml4Entries++, >> PageMapLevel4Entry++) { >> - ZeroMem ( >> - PageMapLevel4Entry, >> - sizeof (PAGE_MAP_AND_DIRECTORY_POINTER) >> - ); >> + if (Page5LevelSupport) { >> + Cr4.UintN = AsmReadCr4 (); >> + Cr4.Bits.LA57 = 1; >> + AsmWriteCr4 (Cr4.UintN); >> + // >> + // For the PML5 entries we are not using fill in a null entry. 
>> + // >> + ZeroMem (PageMapLevel5Entry, (512 - IndexOfPml5Entries) * sizeof >> (PAGE_MAP_AND_DIRECTORY_POINTER)); >> } >> >> // >> -- >> 2.21.0.windows.1 >> >> >> > ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode 2019-07-23 7:48 ` Laszlo Ersek @ 2019-07-23 19:45 ` Singh, Brijesh 0 siblings, 0 replies; 23+ messages in thread From: Singh, Brijesh @ 2019-07-23 19:45 UTC (permalink / raw) To: Laszlo Ersek, Wu, Hao A, devel@edk2.groups.io, Ni, Ray Cc: Singh, Brijesh, Dong, Eric On 7/23/19 2:48 AM, Laszlo Ersek wrote: > (+ Brijesh, and one comment below) > > On 07/23/19 04:05, Wu, Hao A wrote: >>> -----Original Message----- >>> From: devel@edk2.groups.io [mailto:devel@edk2.groups.io] On Behalf Of Ni, >>> Ray >>> Sent: Monday, July 22, 2019 4:16 PM >>> To: devel@edk2.groups.io >>> Cc: Dong, Eric; Laszlo Ersek >>> Subject: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level >>> page table for long mode >>> >>> REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 >>> >>> DxeIpl is responsible to create page table for DXE phase running >>> either in long mode or in 32bit mode with certain protection >>> mechanism enabled (refer to ToBuildPageTable()). >>> >>> The patch updates DxeIpl to create 5-level page table for DXE phase >>> running in long mode when PcdUse5LevelPageTable is TRUE and CPU >>> supports 5-level page table. >>> >>> Signed-off-by: Ray Ni <ray.ni@intel.com> >>> Cc: Eric Dong <eric.dong@intel.com> >>> Cc: Laszlo Ersek <lersek@redhat.com> >>> --- >>> MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf | 1 + >>> .../Core/DxeIplPeim/X64/VirtualMemory.c | 227 ++++++++++++------ >>> 2 files changed, 151 insertions(+), 77 deletions(-) >>> >>> diff --git a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf >>> b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf >>> index abc3217b01..98bc17fc9d 100644 >>> --- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf >>> +++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf >>> @@ -110,6 +110,7 @@ [Pcd.IA32,Pcd.X64] >>> >>> gEfiMdeModulePkgTokenSpaceGuid.PcdNullPointerDetectionPropertyMask >>> ## CONSUMES >>> gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask >>> ## CONSUMES >>> gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard ## >>> CONSUMES >>> + gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable ## >>> SOMETIMES_CONSUMES >>> >>> [Pcd.IA32,Pcd.X64,Pcd.ARM,Pcd.AARCH64] >>> gEfiMdeModulePkgTokenSpaceGuid.PcdSetNxForStack ## >>> SOMETIMES_CONSUMES >>> diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c >>> b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c >>> index edc38e4525..a5bcdcc734 100644 >>> --- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c >>> +++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c >>> @@ -15,7 +15,7 @@ >>> 2) IA-32 Intel(R) Architecture Software Developer's Manual Volume >>> 2:Instruction Set Reference, Intel >>> 3) IA-32 Intel(R) Architecture Software Developer's Manual Volume >>> 3:System Programmer's Guide, Intel >>> >>> -Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR> >>> +Copyright (c) 2006 - 2019, Intel Corporation. All rights reserved.<BR> >>> Copyright (c) 2017, AMD Incorporated. 
All rights reserved.<BR> >>> >>> SPDX-License-Identifier: BSD-2-Clause-Patent >>> @@ -626,14 +626,19 @@ CreateIdentityMappingPageTables ( >>> ) >>> { >>> UINT32 RegEax; >>> + UINT32 RegEbx; >>> + UINT32 RegEcx; >>> UINT32 RegEdx; >>> UINT8 PhysicalAddressBits; >>> EFI_PHYSICAL_ADDRESS PageAddress; >>> + UINTN IndexOfPml5Entries; >>> UINTN IndexOfPml4Entries; >>> UINTN IndexOfPdpEntries; >>> UINTN IndexOfPageDirectoryEntries; >>> + UINT32 NumberOfPml5EntriesNeeded; >>> UINT32 NumberOfPml4EntriesNeeded; >>> UINT32 NumberOfPdpEntriesNeeded; >>> + PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel5Entry; >>> PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel4Entry; >>> PAGE_MAP_AND_DIRECTORY_POINTER *PageMap; >>> PAGE_MAP_AND_DIRECTORY_POINTER >>> *PageDirectoryPointerEntry; >>> @@ -641,9 +646,11 @@ CreateIdentityMappingPageTables ( >>> UINTN TotalPagesNum; >>> UINTN BigPageAddress; >>> VOID *Hob; >>> + BOOLEAN Page5LevelSupport; >>> BOOLEAN Page1GSupport; >>> PAGE_TABLE_1G_ENTRY *PageDirectory1GEntry; >>> UINT64 AddressEncMask; >>> + IA32_CR4 Cr4; >>> >>> // >>> // Make sure AddressEncMask is contained to smallest supported address >>> field >>> @@ -677,33 +684,66 @@ CreateIdentityMappingPageTables ( >>> } >>> } >>> >>> + Page5LevelSupport = FALSE; >>> + if (PcdGetBool (PcdUse5LevelPageTable)) { >>> + AsmCpuidEx (0x7, 0, &RegEax, &RegEbx, &RegEcx, &RegEdx); >>> + DEBUG ((DEBUG_INFO, "Cpuid(7/0): %08x/%08x/%08x/%08x\n", RegEax, >>> RegEbx, RegEcx, RegEdx)); >>> + if ((RegEcx & BIT16) != 0) { >>> + Page5LevelSupport = TRUE; >>> + } >>> + } >>> + >>> + DEBUG ((DEBUG_INFO, "AddressBits/5LevelPaging/1GPage >>> = %d/%d/%d\n", PhysicalAddressBits, Page5LevelSupport, Page1GSupport)); >>> + >>> // >>> - // IA-32e paging translates 48-bit linear addresses to 52-bit physical >>> addresses. >>> + // IA-32e paging translates 48-bit linear addresses to 52-bit physical >>> addresses >>> + // when 5-Level Paging is disabled, >>> + // due to either unsupported by HW, or disabled by PCD. >>> // >>> ASSERT (PhysicalAddressBits <= 52); >>> - if (PhysicalAddressBits > 48) { >>> + if (!Page5LevelSupport && PhysicalAddressBits > 48) { >>> PhysicalAddressBits = 48; >>> } >>> >>> // >>> // Calculate the table entries needed. >>> // >>> - if (PhysicalAddressBits <= 39 ) { >>> - NumberOfPml4EntriesNeeded = 1; >>> - NumberOfPdpEntriesNeeded = (UINT32)LShiftU64 (1, >>> (PhysicalAddressBits - 30)); >>> - } else { >>> - NumberOfPml4EntriesNeeded = (UINT32)LShiftU64 (1, >>> (PhysicalAddressBits - 39)); >>> - NumberOfPdpEntriesNeeded = 512; >>> + NumberOfPml5EntriesNeeded = 1; >>> + if (PhysicalAddressBits > 48) { >>> + NumberOfPml5EntriesNeeded = (UINT32) LShiftU64 (1, >>> PhysicalAddressBits - 48); >>> + PhysicalAddressBits = 48; >>> } >>> >>> + NumberOfPml4EntriesNeeded = 1; >>> + if (PhysicalAddressBits > 39) { >>> + NumberOfPml4EntriesNeeded = (UINT32) LShiftU64 (1, >>> PhysicalAddressBits - 39); >>> + PhysicalAddressBits = 39; >>> + } >>> + >>> + NumberOfPdpEntriesNeeded = 1; >>> + ASSERT (PhysicalAddressBits > 30); >>> + NumberOfPdpEntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits >>> - 30); >>> + >>> // >>> // Pre-allocate big pages to avoid later allocations. 
>>> // >>> if (!Page1GSupport) { >>> - TotalPagesNum = (NumberOfPdpEntriesNeeded + 1) * >>> NumberOfPml4EntriesNeeded + 1; >>> + TotalPagesNum = ((NumberOfPdpEntriesNeeded + 1) * >>> NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1; >>> } else { >>> - TotalPagesNum = NumberOfPml4EntriesNeeded + 1; >>> + TotalPagesNum = (NumberOfPml4EntriesNeeded + 1) * >>> NumberOfPml5EntriesNeeded + 1; >>> + } >>> + >>> + // >>> + // Substract the one page occupied by PML5 entries if 5-Level Paging is >>> disabled. >>> + // >>> + if (!Page5LevelSupport) { >>> + TotalPagesNum--; >>> } >>> + >>> + DEBUG ((DEBUG_INFO, "Pml5/Pml4/Pdp/TotalPage = %d/%d/%d/%d\n", >>> + NumberOfPml5EntriesNeeded, NumberOfPml4EntriesNeeded, >>> + NumberOfPdpEntriesNeeded, TotalPagesNum)); >>> + >>> BigPageAddress = (UINTN) AllocatePageTableMemory (TotalPagesNum); >>> ASSERT (BigPageAddress != 0); >>> >>> @@ -711,92 +751,125 @@ CreateIdentityMappingPageTables ( >>> // By architecture only one PageMapLevel4 exists - so lets allocate storage >>> for it. >>> // >>> PageMap = (VOID *) BigPageAddress; >>> - BigPageAddress += SIZE_4KB; >>> - >>> - PageMapLevel4Entry = PageMap; >>> - PageAddress = 0; >>> - for (IndexOfPml4Entries = 0; IndexOfPml4Entries < >>> NumberOfPml4EntriesNeeded; IndexOfPml4Entries++, >>> PageMapLevel4Entry++) { >>> + if (Page5LevelSupport) { >>> // >>> - // Each PML4 entry points to a page of Page Directory Pointer entires. >>> - // So lets allocate space for them and fill them in in the IndexOfPdpEntries >>> loop. >>> + // By architecture only one PageMapLevel5 exists - so lets allocate storage >>> for it. >>> // >>> - PageDirectoryPointerEntry = (VOID *) BigPageAddress; >>> - BigPageAddress += SIZE_4KB; >>> + PageMapLevel5Entry = PageMap; >>> + BigPageAddress += SIZE_4KB; >>> + } >>> + PageAddress = 0; >>> >>> + for ( IndexOfPml5Entries = 0 >>> + ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded >>> + ; IndexOfPml5Entries++, PageMapLevel5Entry++) { >>> // >>> - // Make a PML4 Entry >>> + // Each PML5 entry points to a page of PML4 entires. >>> + // So lets allocate space for them and fill them in in the >>> IndexOfPml4Entries loop. >>> + // When 5-Level Paging is disabled, below allocation happens only once. >>> // >>> - PageMapLevel4Entry->Uint64 = >>> (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask; >>> - PageMapLevel4Entry->Bits.ReadWrite = 1; >>> - PageMapLevel4Entry->Bits.Present = 1; >>> + PageMapLevel4Entry = (VOID *) BigPageAddress; >>> + BigPageAddress += SIZE_4KB; >>> >>> - if (Page1GSupport) { >>> - PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry; >>> + if (Page5LevelSupport) { >>> + // >>> + // Make a PML5 Entry >>> + // >>> + PageMapLevel5Entry->Uint64 = (UINT64) (UINTN) PageMapLevel4Entry; >> >> >> Hello, >> >> Do we need to bitwise-or with 'AddressEncMask' here? > > Indeed it crossed my mind how this extension intersects with SEV. > > I didn't ask the question myself because I thought 5-level paging didn't > apply to the AMD processors that supported SEV. But maybe we should > figure that out now regardless. > We should apply the PAGE_ENC mask while creating the 5-level pgtable entries (similar to what we do today). The current available processor which support SEV does not support 5-level pgtable but I expect things will change in future generation. 
-Brijesh > Thanks > Laszlo > >> Best Regards, >> Hao Wu >> >> >>> + PageMapLevel5Entry->Bits.ReadWrite = 1; >>> + PageMapLevel5Entry->Bits.Present = 1; >>> + } >>> >>> - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; >>> IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += >>> SIZE_1GB) { >>> - if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) { >>> - Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, >>> StackBase, StackSize); >>> - } else { >>> - // >>> - // Fill in the Page Directory entries >>> - // >>> - PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | >>> AddressEncMask; >>> - PageDirectory1GEntry->Bits.ReadWrite = 1; >>> - PageDirectory1GEntry->Bits.Present = 1; >>> - PageDirectory1GEntry->Bits.MustBe1 = 1; >>> - } >>> - } >>> - } else { >>> - for (IndexOfPdpEntries = 0; IndexOfPdpEntries < >>> NumberOfPdpEntriesNeeded; IndexOfPdpEntries++, >>> PageDirectoryPointerEntry++) { >>> - // >>> - // Each Directory Pointer entries points to a page of Page Directory >>> entires. >>> - // So allocate space for them and fill them in in the >>> IndexOfPageDirectoryEntries loop. >>> - // >>> - PageDirectoryEntry = (VOID *) BigPageAddress; >>> - BigPageAddress += SIZE_4KB; >>> + for ( IndexOfPml4Entries = 0 >>> + ; IndexOfPml4Entries < (NumberOfPml5EntriesNeeded == 1 ? >>> NumberOfPml4EntriesNeeded : 512) >>> + ; IndexOfPml4Entries++, PageMapLevel4Entry++) { >>> + // >>> + // Each PML4 entry points to a page of Page Directory Pointer entires. >>> + // So lets allocate space for them and fill them in in the >>> IndexOfPdpEntries loop. >>> + // >>> + PageDirectoryPointerEntry = (VOID *) BigPageAddress; >>> + BigPageAddress += SIZE_4KB; >>> >>> - // >>> - // Fill in a Page Directory Pointer Entries >>> - // >>> - PageDirectoryPointerEntry->Uint64 = >>> (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask; >>> - PageDirectoryPointerEntry->Bits.ReadWrite = 1; >>> - PageDirectoryPointerEntry->Bits.Present = 1; >>> + // >>> + // Make a PML4 Entry >>> + // >>> + PageMapLevel4Entry->Uint64 = >>> (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask; >>> + PageMapLevel4Entry->Bits.ReadWrite = 1; >>> + PageMapLevel4Entry->Bits.Present = 1; >>> >>> - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < >>> 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += >>> SIZE_2MB) { >>> - if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) { >>> - // >>> - // Need to split this 2M page that covers NULL or stack range. 
>>> - // >>> - Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, >>> StackBase, StackSize); >>> + if (Page1GSupport) { >>> + PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry; >>> + >>> + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < >>> 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress >>> += SIZE_1GB) { >>> + if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) { >>> + Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, >>> StackBase, StackSize); >>> } else { >>> // >>> // Fill in the Page Directory entries >>> // >>> - PageDirectoryEntry->Uint64 = (UINT64)PageAddress | >>> AddressEncMask; >>> - PageDirectoryEntry->Bits.ReadWrite = 1; >>> - PageDirectoryEntry->Bits.Present = 1; >>> - PageDirectoryEntry->Bits.MustBe1 = 1; >>> + PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | >>> AddressEncMask; >>> + PageDirectory1GEntry->Bits.ReadWrite = 1; >>> + PageDirectory1GEntry->Bits.Present = 1; >>> + PageDirectory1GEntry->Bits.MustBe1 = 1; >>> } >>> } >>> - } >>> + } else { >>> + for ( IndexOfPdpEntries = 0 >>> + ; IndexOfPdpEntries < (NumberOfPml4EntriesNeeded == 1 ? >>> NumberOfPdpEntriesNeeded : 512) >>> + ; IndexOfPdpEntries++, PageDirectoryPointerEntry++) { >>> + // >>> + // Each Directory Pointer entries points to a page of Page Directory >>> entires. >>> + // So allocate space for them and fill them in in the >>> IndexOfPageDirectoryEntries loop. >>> + // >>> + PageDirectoryEntry = (VOID *) BigPageAddress; >>> + BigPageAddress += SIZE_4KB; >>> >>> - for (; IndexOfPdpEntries < 512; IndexOfPdpEntries++, >>> PageDirectoryPointerEntry++) { >>> - ZeroMem ( >>> - PageDirectoryPointerEntry, >>> - sizeof(PAGE_MAP_AND_DIRECTORY_POINTER) >>> - ); >>> + // >>> + // Fill in a Page Directory Pointer Entries >>> + // >>> + PageDirectoryPointerEntry->Uint64 = >>> (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask; >>> + PageDirectoryPointerEntry->Bits.ReadWrite = 1; >>> + PageDirectoryPointerEntry->Bits.Present = 1; >>> + >>> + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < >>> 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += >>> SIZE_2MB) { >>> + if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) { >>> + // >>> + // Need to split this 2M page that covers NULL or stack range. >>> + // >>> + Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, >>> StackBase, StackSize); >>> + } else { >>> + // >>> + // Fill in the Page Directory entries >>> + // >>> + PageDirectoryEntry->Uint64 = (UINT64)PageAddress | >>> AddressEncMask; >>> + PageDirectoryEntry->Bits.ReadWrite = 1; >>> + PageDirectoryEntry->Bits.Present = 1; >>> + PageDirectoryEntry->Bits.MustBe1 = 1; >>> + } >>> + } >>> + } >>> + >>> + // >>> + // Fill with null entry for unused PDPTE >>> + // >>> + ZeroMem (PageDirectoryPointerEntry, (512 - IndexOfPdpEntries) * >>> sizeof(PAGE_MAP_AND_DIRECTORY_POINTER)); >>> } >>> } >>> + >>> + // >>> + // For the PML4 entries we are not using fill in a null entry. >>> + // >>> + ZeroMem (PageMapLevel4Entry, (512 - IndexOfPml4Entries) * sizeof >>> (PAGE_MAP_AND_DIRECTORY_POINTER)); >>> } >>> >>> - // >>> - // For the PML4 entries we are not using fill in a null entry. 
>>> - // >>> - for (; IndexOfPml4Entries < 512; IndexOfPml4Entries++, >>> PageMapLevel4Entry++) { >>> - ZeroMem ( >>> - PageMapLevel4Entry, >>> - sizeof (PAGE_MAP_AND_DIRECTORY_POINTER) >>> - ); >>> + if (Page5LevelSupport) { >>> + Cr4.UintN = AsmReadCr4 (); >>> + Cr4.Bits.LA57 = 1; >>> + AsmWriteCr4 (Cr4.UintN); >>> + // >>> + // For the PML5 entries we are not using fill in a null entry. >>> + // >>> + ZeroMem (PageMapLevel5Entry, (512 - IndexOfPml5Entries) * sizeof >>> (PAGE_MAP_AND_DIRECTORY_POINTER)); >>> } >>> >>> // >>> -- >>> 2.21.0.windows.1 >>> >>> >>> >> > ^ permalink raw reply [flat|nested] 23+ messages in thread
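Taking Hao Wu's question and Brijesh's answer together, the requested change is simply to OR the encryption mask into the new PML5 entry, matching what the patch already does for the PML4 and page-directory-pointer entries. A minimal sketch of the adjusted hunk (this reflects the reviewers' suggestion, not what the posted v1 patch contains):

  if (Page5LevelSupport) {
    //
    // Make a PML5 Entry
    //
    PageMapLevel5Entry->Uint64         = (UINT64) (UINTN) PageMapLevel4Entry | AddressEncMask;
    PageMapLevel5Entry->Bits.ReadWrite = 1;
    PageMapLevel5Entry->Bits.Present   = 1;
  }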
* Re: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode 2019-07-22 8:15 ` [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode Ni, Ray 2019-07-23 2:05 ` [edk2-devel] " Wu, Hao A @ 2019-07-23 9:46 ` Laszlo Ersek 2019-07-23 15:29 ` Ni, Ray 1 sibling, 1 reply; 23+ messages in thread From: Laszlo Ersek @ 2019-07-23 9:46 UTC (permalink / raw) To: devel, ray.ni; +Cc: Eric Dong On 07/22/19 10:15, Ni, Ray wrote: > REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 > > DxeIpl is responsible to create page table for DXE phase running > either in long mode or in 32bit mode with certain protection > mechanism enabled (refer to ToBuildPageTable()). > > The patch updates DxeIpl to create 5-level page table for DXE phase > running in long mode when PcdUse5LevelPageTable is TRUE and CPU > supports 5-level page table. > > Signed-off-by: Ray Ni <ray.ni@intel.com> > Cc: Eric Dong <eric.dong@intel.com> > Cc: Laszlo Ersek <lersek@redhat.com> > --- > MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf | 1 + > .../Core/DxeIplPeim/X64/VirtualMemory.c | 227 ++++++++++++------ > 2 files changed, 151 insertions(+), 77 deletions(-) > > diff --git a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > index abc3217b01..98bc17fc9d 100644 > --- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > +++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > @@ -110,6 +110,7 @@ [Pcd.IA32,Pcd.X64] > gEfiMdeModulePkgTokenSpaceGuid.PcdNullPointerDetectionPropertyMask ## CONSUMES > gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask ## CONSUMES > gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard ## CONSUMES > + gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable ## SOMETIMES_CONSUMES > > [Pcd.IA32,Pcd.X64,Pcd.ARM,Pcd.AARCH64] > gEfiMdeModulePkgTokenSpaceGuid.PcdSetNxForStack ## SOMETIMES_CONSUMES > diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > index edc38e4525..a5bcdcc734 100644 > --- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > +++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > @@ -15,7 +15,7 @@ > 2) IA-32 Intel(R) Architecture Software Developer's Manual Volume 2:Instruction Set Reference, Intel > 3) IA-32 Intel(R) Architecture Software Developer's Manual Volume 3:System Programmer's Guide, Intel > > -Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR> > +Copyright (c) 2006 - 2019, Intel Corporation. All rights reserved.<BR> > Copyright (c) 2017, AMD Incorporated. 
All rights reserved.<BR> > > SPDX-License-Identifier: BSD-2-Clause-Patent > @@ -626,14 +626,19 @@ CreateIdentityMappingPageTables ( > ) > { > UINT32 RegEax; > + UINT32 RegEbx; > + UINT32 RegEcx; > UINT32 RegEdx; > UINT8 PhysicalAddressBits; > EFI_PHYSICAL_ADDRESS PageAddress; > + UINTN IndexOfPml5Entries; > UINTN IndexOfPml4Entries; > UINTN IndexOfPdpEntries; > UINTN IndexOfPageDirectoryEntries; > + UINT32 NumberOfPml5EntriesNeeded; > UINT32 NumberOfPml4EntriesNeeded; > UINT32 NumberOfPdpEntriesNeeded; > + PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel5Entry; > PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel4Entry; > PAGE_MAP_AND_DIRECTORY_POINTER *PageMap; > PAGE_MAP_AND_DIRECTORY_POINTER *PageDirectoryPointerEntry; > @@ -641,9 +646,11 @@ CreateIdentityMappingPageTables ( > UINTN TotalPagesNum; > UINTN BigPageAddress; > VOID *Hob; > + BOOLEAN Page5LevelSupport; > BOOLEAN Page1GSupport; > PAGE_TABLE_1G_ENTRY *PageDirectory1GEntry; > UINT64 AddressEncMask; > + IA32_CR4 Cr4; > > // > // Make sure AddressEncMask is contained to smallest supported address field > @@ -677,33 +684,66 @@ CreateIdentityMappingPageTables ( > } > } > > + Page5LevelSupport = FALSE; > + if (PcdGetBool (PcdUse5LevelPageTable)) { > + AsmCpuidEx (0x7, 0, &RegEax, &RegEbx, &RegEcx, &RegEdx); > + DEBUG ((DEBUG_INFO, "Cpuid(7/0): %08x/%08x/%08x/%08x\n", RegEax, RegEbx, RegEcx, RegEdx)); > + if ((RegEcx & BIT16) != 0) { (1) Would it be possible to use macro names here, for Index and SubIndex in AsmCpuidEx(), and a "better" macro name than BIT16, in the RegEcx check? > + Page5LevelSupport = TRUE; > + } > + } > + > + DEBUG ((DEBUG_INFO, "AddressBits/5LevelPaging/1GPage = %d/%d/%d\n", PhysicalAddressBits, Page5LevelSupport, Page1GSupport)); > + (2) Can we format this log message as: AddressBits=%d 5LevelPaging=%d 1GPage=%d ? > // > - // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses. > + // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses > + // when 5-Level Paging is disabled, > + // due to either unsupported by HW, or disabled by PCD. > // > ASSERT (PhysicalAddressBits <= 52); > - if (PhysicalAddressBits > 48) { > + if (!Page5LevelSupport && PhysicalAddressBits > 48) { > PhysicalAddressBits = 48; > } > > // > // Calculate the table entries needed. > // > - if (PhysicalAddressBits <= 39 ) { > - NumberOfPml4EntriesNeeded = 1; > - NumberOfPdpEntriesNeeded = (UINT32)LShiftU64 (1, (PhysicalAddressBits - 30)); > - } else { > - NumberOfPml4EntriesNeeded = (UINT32)LShiftU64 (1, (PhysicalAddressBits - 39)); > - NumberOfPdpEntriesNeeded = 512; > + NumberOfPml5EntriesNeeded = 1; > + if (PhysicalAddressBits > 48) { > + NumberOfPml5EntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 48); > + PhysicalAddressBits = 48; > } > > + NumberOfPml4EntriesNeeded = 1; > + if (PhysicalAddressBits > 39) { > + NumberOfPml4EntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 39); > + PhysicalAddressBits = 39; > + } > + > + NumberOfPdpEntriesNeeded = 1; > + ASSERT (PhysicalAddressBits > 30); > + NumberOfPdpEntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 30); > + > // > // Pre-allocate big pages to avoid later allocations. 
> // > if (!Page1GSupport) { > - TotalPagesNum = (NumberOfPdpEntriesNeeded + 1) * NumberOfPml4EntriesNeeded + 1; > + TotalPagesNum = ((NumberOfPdpEntriesNeeded + 1) * NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1; > } else { > - TotalPagesNum = NumberOfPml4EntriesNeeded + 1; > + TotalPagesNum = (NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1; > + } > + > + // > + // Substract the one page occupied by PML5 entries if 5-Level Paging is disabled. > + // > + if (!Page5LevelSupport) { > + TotalPagesNum--; > } > + > + DEBUG ((DEBUG_INFO, "Pml5/Pml4/Pdp/TotalPage = %d/%d/%d/%d\n", > + NumberOfPml5EntriesNeeded, NumberOfPml4EntriesNeeded, > + NumberOfPdpEntriesNeeded, TotalPagesNum)); > + (3) Same comment about log message formatting as in (2). (4) Please log UINT32 values with %u, not %d. (5) Please log UINTN values with %Lu (and cast them to UINT64 first). (6) More generally, can we please replace this patch with three patches, in the series? - the first patch should extract the TotalPagesNum calculation to a separate *library* function (it should be a public function in a BASE or PEIM library) - the second patch should extend the TotalPagesNum calculation in the library function to 5-level paging - the third patch should be the remaining code from the present patch. Here's why: in OvmfPkg/PlatformPei, the GetPeiMemoryCap() function duplicates this calculation. In OVMF, PEI-phase memory allocations can be dominated by the page tables built for 64-bit DXE, and so OvmfPkg/PlatformPei must know, for sizing the permanent PEI RAM, how much RAM the DXE IPL PEIM will need for those page tables. I would *really* dislike copying this update (for 5-level paging) from CreateIdentityMappingPageTables() to GetPeiMemoryCap(). Instead, the calculation should be extracted to a library function, and then OVMF (and all other platforms affected similarly) could call the new function. If this is not feasible or very much out of scope, then I guess we'll just keep the PCD as FALSE in OVMF, for the time being. I'll let others review the rest of this patch. Thanks Laszlo > BigPageAddress = (UINTN) AllocatePageTableMemory (TotalPagesNum); > ASSERT (BigPageAddress != 0); > > @@ -711,92 +751,125 @@ CreateIdentityMappingPageTables ( > // By architecture only one PageMapLevel4 exists - so lets allocate storage for it. > // > PageMap = (VOID *) BigPageAddress; > - BigPageAddress += SIZE_4KB; > - > - PageMapLevel4Entry = PageMap; > - PageAddress = 0; > - for (IndexOfPml4Entries = 0; IndexOfPml4Entries < NumberOfPml4EntriesNeeded; IndexOfPml4Entries++, PageMapLevel4Entry++) { > + if (Page5LevelSupport) { > // > - // Each PML4 entry points to a page of Page Directory Pointer entires. > - // So lets allocate space for them and fill them in in the IndexOfPdpEntries loop. > + // By architecture only one PageMapLevel5 exists - so lets allocate storage for it. > // > - PageDirectoryPointerEntry = (VOID *) BigPageAddress; > - BigPageAddress += SIZE_4KB; > + PageMapLevel5Entry = PageMap; > + BigPageAddress += SIZE_4KB; > + } > + PageAddress = 0; > > + for ( IndexOfPml5Entries = 0 > + ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded > + ; IndexOfPml5Entries++, PageMapLevel5Entry++) { > // > - // Make a PML4 Entry > + // Each PML5 entry points to a page of PML4 entires. > + // So lets allocate space for them and fill them in in the IndexOfPml4Entries loop. > + // When 5-Level Paging is disabled, below allocation happens only once. 
> // > - PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask; > - PageMapLevel4Entry->Bits.ReadWrite = 1; > - PageMapLevel4Entry->Bits.Present = 1; > + PageMapLevel4Entry = (VOID *) BigPageAddress; > + BigPageAddress += SIZE_4KB; > > - if (Page1GSupport) { > - PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry; > + if (Page5LevelSupport) { > + // > + // Make a PML5 Entry > + // > + PageMapLevel5Entry->Uint64 = (UINT64) (UINTN) PageMapLevel4Entry; > + PageMapLevel5Entry->Bits.ReadWrite = 1; > + PageMapLevel5Entry->Bits.Present = 1; > + } > > - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += SIZE_1GB) { > - if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) { > - Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, StackBase, StackSize); > - } else { > - // > - // Fill in the Page Directory entries > - // > - PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | AddressEncMask; > - PageDirectory1GEntry->Bits.ReadWrite = 1; > - PageDirectory1GEntry->Bits.Present = 1; > - PageDirectory1GEntry->Bits.MustBe1 = 1; > - } > - } > - } else { > - for (IndexOfPdpEntries = 0; IndexOfPdpEntries < NumberOfPdpEntriesNeeded; IndexOfPdpEntries++, PageDirectoryPointerEntry++) { > - // > - // Each Directory Pointer entries points to a page of Page Directory entires. > - // So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop. > - // > - PageDirectoryEntry = (VOID *) BigPageAddress; > - BigPageAddress += SIZE_4KB; > + for ( IndexOfPml4Entries = 0 > + ; IndexOfPml4Entries < (NumberOfPml5EntriesNeeded == 1 ? NumberOfPml4EntriesNeeded : 512) > + ; IndexOfPml4Entries++, PageMapLevel4Entry++) { > + // > + // Each PML4 entry points to a page of Page Directory Pointer entires. > + // So lets allocate space for them and fill them in in the IndexOfPdpEntries loop. > + // > + PageDirectoryPointerEntry = (VOID *) BigPageAddress; > + BigPageAddress += SIZE_4KB; > > - // > - // Fill in a Page Directory Pointer Entries > - // > - PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask; > - PageDirectoryPointerEntry->Bits.ReadWrite = 1; > - PageDirectoryPointerEntry->Bits.Present = 1; > + // > + // Make a PML4 Entry > + // > + PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask; > + PageMapLevel4Entry->Bits.ReadWrite = 1; > + PageMapLevel4Entry->Bits.Present = 1; > > - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) { > - if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) { > - // > - // Need to split this 2M page that covers NULL or stack range. 
> - // > - Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, StackBase, StackSize); > + if (Page1GSupport) { > + PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry; > + > + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += SIZE_1GB) { > + if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) { > + Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, StackBase, StackSize); > } else { > // > // Fill in the Page Directory entries > // > - PageDirectoryEntry->Uint64 = (UINT64)PageAddress | AddressEncMask; > - PageDirectoryEntry->Bits.ReadWrite = 1; > - PageDirectoryEntry->Bits.Present = 1; > - PageDirectoryEntry->Bits.MustBe1 = 1; > + PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | AddressEncMask; > + PageDirectory1GEntry->Bits.ReadWrite = 1; > + PageDirectory1GEntry->Bits.Present = 1; > + PageDirectory1GEntry->Bits.MustBe1 = 1; > } > } > - } > + } else { > + for ( IndexOfPdpEntries = 0 > + ; IndexOfPdpEntries < (NumberOfPml4EntriesNeeded == 1 ? NumberOfPdpEntriesNeeded : 512) > + ; IndexOfPdpEntries++, PageDirectoryPointerEntry++) { > + // > + // Each Directory Pointer entries points to a page of Page Directory entires. > + // So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop. > + // > + PageDirectoryEntry = (VOID *) BigPageAddress; > + BigPageAddress += SIZE_4KB; > > - for (; IndexOfPdpEntries < 512; IndexOfPdpEntries++, PageDirectoryPointerEntry++) { > - ZeroMem ( > - PageDirectoryPointerEntry, > - sizeof(PAGE_MAP_AND_DIRECTORY_POINTER) > - ); > + // > + // Fill in a Page Directory Pointer Entries > + // > + PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask; > + PageDirectoryPointerEntry->Bits.ReadWrite = 1; > + PageDirectoryPointerEntry->Bits.Present = 1; > + > + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) { > + if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) { > + // > + // Need to split this 2M page that covers NULL or stack range. > + // > + Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, StackBase, StackSize); > + } else { > + // > + // Fill in the Page Directory entries > + // > + PageDirectoryEntry->Uint64 = (UINT64)PageAddress | AddressEncMask; > + PageDirectoryEntry->Bits.ReadWrite = 1; > + PageDirectoryEntry->Bits.Present = 1; > + PageDirectoryEntry->Bits.MustBe1 = 1; > + } > + } > + } > + > + // > + // Fill with null entry for unused PDPTE > + // > + ZeroMem (PageDirectoryPointerEntry, (512 - IndexOfPdpEntries) * sizeof(PAGE_MAP_AND_DIRECTORY_POINTER)); > } > } > + > + // > + // For the PML4 entries we are not using fill in a null entry. > + // > + ZeroMem (PageMapLevel4Entry, (512 - IndexOfPml4Entries) * sizeof (PAGE_MAP_AND_DIRECTORY_POINTER)); > } > > - // > - // For the PML4 entries we are not using fill in a null entry. > - // > - for (; IndexOfPml4Entries < 512; IndexOfPml4Entries++, PageMapLevel4Entry++) { > - ZeroMem ( > - PageMapLevel4Entry, > - sizeof (PAGE_MAP_AND_DIRECTORY_POINTER) > - ); > + if (Page5LevelSupport) { > + Cr4.UintN = AsmReadCr4 (); > + Cr4.Bits.LA57 = 1; > + AsmWriteCr4 (Cr4.UintN); > + // > + // For the PML5 entries we are not using fill in a null entry. 
> + // > + ZeroMem (PageMapLevel5Entry, (512 - IndexOfPml5Entries) * sizeof (PAGE_MAP_AND_DIRECTORY_POINTER)); > } > > // > ^ permalink raw reply [flat|nested] 23+ messages in thread
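For reference, the control flow implemented by the hunks above can be condensed into a short, self-contained sketch. This is illustrative only (the function name is invented for the example); it uses the same BaseLib helpers the patch relies on (AsmCpuidEx, AsmReadCr4, AsmWriteCr4, IA32_CR4) plus the PcdUse5LevelPageTable PCD from patch 3/4, and it elides the PML5/PML4/PDPT/PD construction loops shown in the diff:

//
// Illustrative sketch only; not part of the posted patch.
//
BOOLEAN
EnableAndReport5LevelPaging (
  VOID
  )
{
  UINT32    RegEax;
  UINT32    RegEbx;
  UINT32    RegEcx;
  UINT32    RegEdx;
  IA32_CR4  Cr4;

  if (!PcdGetBool (PcdUse5LevelPageTable)) {
    return FALSE;
  }

  //
  // CPUID leaf 07H, sub-leaf 0: ECX bit 16 enumerates 5-level paging support.
  //
  AsmCpuidEx (0x7, 0, &RegEax, &RegEbx, &RegEcx, &RegEdx);
  if ((RegEcx & BIT16) == 0) {
    return FALSE;
  }

  //
  // ... build the identity-mapped PML5 -> PML4 -> PDPT -> PD hierarchy here,
  // exactly as the loops in the diff above do ...
  //

  //
  // Reflect the deeper root table in CR4, as the last hunk of the patch does.
  //
  Cr4.UintN     = AsmReadCr4 ();
  Cr4.Bits.LA57 = 1;
  AsmWriteCr4 (Cr4.UintN);

  return TRUE;
}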
* Re: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode 2019-07-23 9:46 ` Laszlo Ersek @ 2019-07-23 15:29 ` Ni, Ray 2019-07-23 19:20 ` Laszlo Ersek 0 siblings, 1 reply; 23+ messages in thread From: Ni, Ray @ 2019-07-23 15:29 UTC (permalink / raw) To: devel@edk2.groups.io, lersek@redhat.com; +Cc: Dong, Eric > -----Original Message----- > From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Laszlo Ersek > Sent: Tuesday, July 23, 2019 5:46 PM > To: devel@edk2.groups.io; Ni, Ray <ray.ni@intel.com> > Cc: Dong, Eric <eric.dong@intel.com> > Subject: Re: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode > > On 07/22/19 10:15, Ni, Ray wrote: > > REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 > > > > DxeIpl is responsible to create page table for DXE phase running > > either in long mode or in 32bit mode with certain protection > > mechanism enabled (refer to ToBuildPageTable()). > > > > The patch updates DxeIpl to create 5-level page table for DXE phase > > running in long mode when PcdUse5LevelPageTable is TRUE and CPU > > supports 5-level page table. > > > > Signed-off-by: Ray Ni <ray.ni@intel.com> > > Cc: Eric Dong <eric.dong@intel.com> > > Cc: Laszlo Ersek <lersek@redhat.com> > > --- > > MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf | 1 + > > .../Core/DxeIplPeim/X64/VirtualMemory.c | 227 ++++++++++++------ > > 2 files changed, 151 insertions(+), 77 deletions(-) > > > > diff --git a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > > index abc3217b01..98bc17fc9d 100644 > > --- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > > +++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > > @@ -110,6 +110,7 @@ [Pcd.IA32,Pcd.X64] > > gEfiMdeModulePkgTokenSpaceGuid.PcdNullPointerDetectionPropertyMask ## CONSUMES > > gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask ## CONSUMES > > gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard ## CONSUMES > > + gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable ## SOMETIMES_CONSUMES > > > > [Pcd.IA32,Pcd.X64,Pcd.ARM,Pcd.AARCH64] > > gEfiMdeModulePkgTokenSpaceGuid.PcdSetNxForStack ## SOMETIMES_CONSUMES > > diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > > index edc38e4525..a5bcdcc734 100644 > > --- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > > +++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > > @@ -15,7 +15,7 @@ > > 2) IA-32 Intel(R) Architecture Software Developer's Manual Volume 2:Instruction Set Reference, Intel > > 3) IA-32 Intel(R) Architecture Software Developer's Manual Volume 3:System Programmer's Guide, Intel > > > > -Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR> > > +Copyright (c) 2006 - 2019, Intel Corporation. All rights reserved.<BR> > > Copyright (c) 2017, AMD Incorporated. 
All rights reserved.<BR> > > > > SPDX-License-Identifier: BSD-2-Clause-Patent > > @@ -626,14 +626,19 @@ CreateIdentityMappingPageTables ( > > ) > > { > > UINT32 RegEax; > > + UINT32 RegEbx; > > + UINT32 RegEcx; > > UINT32 RegEdx; > > UINT8 PhysicalAddressBits; > > EFI_PHYSICAL_ADDRESS PageAddress; > > + UINTN IndexOfPml5Entries; > > UINTN IndexOfPml4Entries; > > UINTN IndexOfPdpEntries; > > UINTN IndexOfPageDirectoryEntries; > > + UINT32 NumberOfPml5EntriesNeeded; > > UINT32 NumberOfPml4EntriesNeeded; > > UINT32 NumberOfPdpEntriesNeeded; > > + PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel5Entry; > > PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel4Entry; > > PAGE_MAP_AND_DIRECTORY_POINTER *PageMap; > > PAGE_MAP_AND_DIRECTORY_POINTER *PageDirectoryPointerEntry; > > @@ -641,9 +646,11 @@ CreateIdentityMappingPageTables ( > > UINTN TotalPagesNum; > > UINTN BigPageAddress; > > VOID *Hob; > > + BOOLEAN Page5LevelSupport; > > BOOLEAN Page1GSupport; > > PAGE_TABLE_1G_ENTRY *PageDirectory1GEntry; > > UINT64 AddressEncMask; > > + IA32_CR4 Cr4; > > > > // > > // Make sure AddressEncMask is contained to smallest supported address field > > @@ -677,33 +684,66 @@ CreateIdentityMappingPageTables ( > > } > > } > > > > + Page5LevelSupport = FALSE; > > + if (PcdGetBool (PcdUse5LevelPageTable)) { > > + AsmCpuidEx (0x7, 0, &RegEax, &RegEbx, &RegEcx, &RegEdx); > > + DEBUG ((DEBUG_INFO, "Cpuid(7/0): %08x/%08x/%08x/%08x\n", RegEax, RegEbx, RegEcx, RegEdx)); > > + if ((RegEcx & BIT16) != 0) { > > (1) Would it be possible to use macro names here, for Index and SubIndex > in AsmCpuidEx(), and a "better" macro name than BIT16, in the RegEcx check? There are macros which are defined in UefiCpuPkg for the CPUID leaf functions and structures for the CPUID output data. But since there is rule that MdeModulePkg cannot depend on UefiCpuPkg so the code cannot use those macros/structures. I am thinking of a new rule that UefiCpuPkg and MdeModulePkg can depend on each other. I haven't talked with Mike on this. Not sure if he is ok on this. For this case, yes I can temporary define 2 macros: one for 0x7, the other for BIT16. > > > + Page5LevelSupport = TRUE; > > + } > > + } > > + > > + DEBUG ((DEBUG_INFO, "AddressBits/5LevelPaging/1GPage = %d/%d/%d\n", PhysicalAddressBits, Page5LevelSupport, > Page1GSupport)); > > + > > (2) Can we format this log message as: > > AddressBits=%d 5LevelPaging=%d 1GPage=%d ok. > > ? > > > // > > - // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses. > > + // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses > > + // when 5-Level Paging is disabled, > > + // due to either unsupported by HW, or disabled by PCD. > > // > > ASSERT (PhysicalAddressBits <= 52); > > - if (PhysicalAddressBits > 48) { > > + if (!Page5LevelSupport && PhysicalAddressBits > 48) { > > PhysicalAddressBits = 48; > > } > > > > // > > // Calculate the table entries needed. 
> > // > > - if (PhysicalAddressBits <= 39 ) { > > - NumberOfPml4EntriesNeeded = 1; > > - NumberOfPdpEntriesNeeded = (UINT32)LShiftU64 (1, (PhysicalAddressBits - 30)); > > - } else { > > - NumberOfPml4EntriesNeeded = (UINT32)LShiftU64 (1, (PhysicalAddressBits - 39)); > > - NumberOfPdpEntriesNeeded = 512; > > + NumberOfPml5EntriesNeeded = 1; > > + if (PhysicalAddressBits > 48) { > > + NumberOfPml5EntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 48); > > + PhysicalAddressBits = 48; > > } > > > > + NumberOfPml4EntriesNeeded = 1; > > + if (PhysicalAddressBits > 39) { > > + NumberOfPml4EntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 39); > > + PhysicalAddressBits = 39; > > + } > > + > > + NumberOfPdpEntriesNeeded = 1; > > + ASSERT (PhysicalAddressBits > 30); > > + NumberOfPdpEntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 30); > > + > > // > > // Pre-allocate big pages to avoid later allocations. > > // > > if (!Page1GSupport) { > > - TotalPagesNum = (NumberOfPdpEntriesNeeded + 1) * NumberOfPml4EntriesNeeded + 1; > > + TotalPagesNum = ((NumberOfPdpEntriesNeeded + 1) * NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded > + 1; > > } else { > > - TotalPagesNum = NumberOfPml4EntriesNeeded + 1; > > + TotalPagesNum = (NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1; > > + } > > + > > + // > > + // Substract the one page occupied by PML5 entries if 5-Level Paging is disabled. > > + // > > + if (!Page5LevelSupport) { > > + TotalPagesNum--; > > } > > + > > + DEBUG ((DEBUG_INFO, "Pml5/Pml4/Pdp/TotalPage = %d/%d/%d/%d\n", > > + NumberOfPml5EntriesNeeded, NumberOfPml4EntriesNeeded, > > + NumberOfPdpEntriesNeeded, TotalPagesNum)); > > + > > (3) Same comment about log message formatting as in (2). > > (4) Please log UINT32 values with %u, not %d. > > (5) Please log UINTN values with %Lu (and cast them to UINT64 first). ok to above 3, 4, 5. > > (6) More generally, can we please replace this patch with three patches, > in the series? > > - the first patch should extract the TotalPagesNum calculation to a > separate *library* function (it should be a public function in a BASE or > PEIM library) > > - the second patch should extend the TotalPagesNum calculation in the > library function to 5-level paging > > - the third patch should be the remaining code from the present patch. > > Here's why: in OvmfPkg/PlatformPei, the GetPeiMemoryCap() function > duplicates this calculation. In OVMF, PEI-phase memory allocations can > be dominated by the page tables built for 64-bit DXE, and so > OvmfPkg/PlatformPei must know, for sizing the permanent PEI RAM, how > much RAM the DXE IPL PEIM will need for those page tables. > > I would *really* dislike copying this update (for 5-level paging) from > CreateIdentityMappingPageTables() to GetPeiMemoryCap(). Instead, the > calculation should be extracted to a library function, and then OVMF > (and all other platforms affected similarly) could call the new function. > > If this is not feasible or very much out of scope, then I guess we'll > just keep the PCD as FALSE in OVMF, for the time being. > BZ https://bugzilla.tianocore.org/show_bug.cgi?id=847 may be what you need. Extracting a library API to return how many pages are needed for page tables needs to consider several cases: 1. 4K or 2M or 1G page size is used. 2. range of physical memory 3. page table level But right now, this is not in the high priority in my to-do list. I can help to change GetPeiMemoryCap() for short-term needs. Are you ok with this? 
> I'll let others review the rest of this patch. > > Thanks > Laszlo > > > BigPageAddress = (UINTN) AllocatePageTableMemory (TotalPagesNum); > > ASSERT (BigPageAddress != 0); > > > > @@ -711,92 +751,125 @@ CreateIdentityMappingPageTables ( > > // By architecture only one PageMapLevel4 exists - so lets allocate storage for it. > > // > > PageMap = (VOID *) BigPageAddress; > > - BigPageAddress += SIZE_4KB; > > - > > - PageMapLevel4Entry = PageMap; > > - PageAddress = 0; > > - for (IndexOfPml4Entries = 0; IndexOfPml4Entries < NumberOfPml4EntriesNeeded; IndexOfPml4Entries++, > PageMapLevel4Entry++) { > > + if (Page5LevelSupport) { > > // > > - // Each PML4 entry points to a page of Page Directory Pointer entires. > > - // So lets allocate space for them and fill them in in the IndexOfPdpEntries loop. > > + // By architecture only one PageMapLevel5 exists - so lets allocate storage for it. > > // > > - PageDirectoryPointerEntry = (VOID *) BigPageAddress; > > - BigPageAddress += SIZE_4KB; > > + PageMapLevel5Entry = PageMap; > > + BigPageAddress += SIZE_4KB; > > + } > > + PageAddress = 0; > > > > + for ( IndexOfPml5Entries = 0 > > + ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded > > + ; IndexOfPml5Entries++, PageMapLevel5Entry++) { > > // > > - // Make a PML4 Entry > > + // Each PML5 entry points to a page of PML4 entires. > > + // So lets allocate space for them and fill them in in the IndexOfPml4Entries loop. > > + // When 5-Level Paging is disabled, below allocation happens only once. > > // > > - PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask; > > - PageMapLevel4Entry->Bits.ReadWrite = 1; > > - PageMapLevel4Entry->Bits.Present = 1; > > + PageMapLevel4Entry = (VOID *) BigPageAddress; > > + BigPageAddress += SIZE_4KB; > > > > - if (Page1GSupport) { > > - PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry; > > + if (Page5LevelSupport) { > > + // > > + // Make a PML5 Entry > > + // > > + PageMapLevel5Entry->Uint64 = (UINT64) (UINTN) PageMapLevel4Entry; > > + PageMapLevel5Entry->Bits.ReadWrite = 1; > > + PageMapLevel5Entry->Bits.Present = 1; > > + } > > > > - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, > PageDirectory1GEntry++, PageAddress += SIZE_1GB) { > > - if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) { > > - Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, StackBase, StackSize); > > - } else { > > - // > > - // Fill in the Page Directory entries > > - // > > - PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | AddressEncMask; > > - PageDirectory1GEntry->Bits.ReadWrite = 1; > > - PageDirectory1GEntry->Bits.Present = 1; > > - PageDirectory1GEntry->Bits.MustBe1 = 1; > > - } > > - } > > - } else { > > - for (IndexOfPdpEntries = 0; IndexOfPdpEntries < NumberOfPdpEntriesNeeded; IndexOfPdpEntries++, > PageDirectoryPointerEntry++) { > > - // > > - // Each Directory Pointer entries points to a page of Page Directory entires. > > - // So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop. > > - // > > - PageDirectoryEntry = (VOID *) BigPageAddress; > > - BigPageAddress += SIZE_4KB; > > + for ( IndexOfPml4Entries = 0 > > + ; IndexOfPml4Entries < (NumberOfPml5EntriesNeeded == 1 ? NumberOfPml4EntriesNeeded : 512) > > + ; IndexOfPml4Entries++, PageMapLevel4Entry++) { > > + // > > + // Each PML4 entry points to a page of Page Directory Pointer entires. 
> > + // So lets allocate space for them and fill them in in the IndexOfPdpEntries loop. > > + // > > + PageDirectoryPointerEntry = (VOID *) BigPageAddress; > > + BigPageAddress += SIZE_4KB; > > > > - // > > - // Fill in a Page Directory Pointer Entries > > - // > > - PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask; > > - PageDirectoryPointerEntry->Bits.ReadWrite = 1; > > - PageDirectoryPointerEntry->Bits.Present = 1; > > + // > > + // Make a PML4 Entry > > + // > > + PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask; > > + PageMapLevel4Entry->Bits.ReadWrite = 1; > > + PageMapLevel4Entry->Bits.Present = 1; > > > > - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, > PageDirectoryEntry++, PageAddress += SIZE_2MB) { > > - if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) { > > - // > > - // Need to split this 2M page that covers NULL or stack range. > > - // > > - Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, StackBase, StackSize); > > + if (Page1GSupport) { > > + PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry; > > + > > + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, > PageDirectory1GEntry++, PageAddress += SIZE_1GB) { > > + if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) { > > + Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, StackBase, StackSize); > > } else { > > // > > // Fill in the Page Directory entries > > // > > - PageDirectoryEntry->Uint64 = (UINT64)PageAddress | AddressEncMask; > > - PageDirectoryEntry->Bits.ReadWrite = 1; > > - PageDirectoryEntry->Bits.Present = 1; > > - PageDirectoryEntry->Bits.MustBe1 = 1; > > + PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | AddressEncMask; > > + PageDirectory1GEntry->Bits.ReadWrite = 1; > > + PageDirectory1GEntry->Bits.Present = 1; > > + PageDirectory1GEntry->Bits.MustBe1 = 1; > > } > > } > > - } > > + } else { > > + for ( IndexOfPdpEntries = 0 > > + ; IndexOfPdpEntries < (NumberOfPml4EntriesNeeded == 1 ? NumberOfPdpEntriesNeeded : 512) > > + ; IndexOfPdpEntries++, PageDirectoryPointerEntry++) { > > + // > > + // Each Directory Pointer entries points to a page of Page Directory entires. > > + // So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop. > > + // > > + PageDirectoryEntry = (VOID *) BigPageAddress; > > + BigPageAddress += SIZE_4KB; > > > > - for (; IndexOfPdpEntries < 512; IndexOfPdpEntries++, PageDirectoryPointerEntry++) { > > - ZeroMem ( > > - PageDirectoryPointerEntry, > > - sizeof(PAGE_MAP_AND_DIRECTORY_POINTER) > > - ); > > + // > > + // Fill in a Page Directory Pointer Entries > > + // > > + PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask; > > + PageDirectoryPointerEntry->Bits.ReadWrite = 1; > > + PageDirectoryPointerEntry->Bits.Present = 1; > > + > > + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, > PageDirectoryEntry++, PageAddress += SIZE_2MB) { > > + if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) { > > + // > > + // Need to split this 2M page that covers NULL or stack range. 
> > + // > > + Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, StackBase, StackSize); > > + } else { > > + // > > + // Fill in the Page Directory entries > > + // > > + PageDirectoryEntry->Uint64 = (UINT64)PageAddress | AddressEncMask; > > + PageDirectoryEntry->Bits.ReadWrite = 1; > > + PageDirectoryEntry->Bits.Present = 1; > > + PageDirectoryEntry->Bits.MustBe1 = 1; > > + } > > + } > > + } > > + > > + // > > + // Fill with null entry for unused PDPTE > > + // > > + ZeroMem (PageDirectoryPointerEntry, (512 - IndexOfPdpEntries) * sizeof(PAGE_MAP_AND_DIRECTORY_POINTER)); > > } > > } > > + > > + // > > + // For the PML4 entries we are not using fill in a null entry. > > + // > > + ZeroMem (PageMapLevel4Entry, (512 - IndexOfPml4Entries) * sizeof (PAGE_MAP_AND_DIRECTORY_POINTER)); > > } > > > > - // > > - // For the PML4 entries we are not using fill in a null entry. > > - // > > - for (; IndexOfPml4Entries < 512; IndexOfPml4Entries++, PageMapLevel4Entry++) { > > - ZeroMem ( > > - PageMapLevel4Entry, > > - sizeof (PAGE_MAP_AND_DIRECTORY_POINTER) > > - ); > > + if (Page5LevelSupport) { > > + Cr4.UintN = AsmReadCr4 (); > > + Cr4.Bits.LA57 = 1; > > + AsmWriteCr4 (Cr4.UintN); > > + // > > + // For the PML5 entries we are not using fill in a null entry. > > + // > > + ZeroMem (PageMapLevel5Entry, (512 - IndexOfPml5Entries) * sizeof (PAGE_MAP_AND_DIRECTORY_POINTER)); > > } > > > > // > > > > > ^ permalink raw reply [flat|nested] 23+ messages in thread
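As a rough illustration of the two stop-gap macros Ray mentions above (names invented for this sketch; UefiCpuPkg/Include/Register/Cpuid.h carries its own, differently structured definitions that MdeModulePkg currently cannot include), the open-coded check could become:

//
// Hypothetical local definitions, named for this example only.
// Values follow the Intel SDM: CPUID leaf 07H, sub-leaf 0, ECX bit 16
// enumerates 5-level paging support.
//
#define CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_LEAF  0x07
#define CPUID_SEF_ECX_5LEVEL_PAGE                     BIT16

STATIC
BOOLEAN
Is5LevelPagingSupported (
  VOID
  )
{
  UINT32  RegEcx;

  //
  // BaseLib's AsmCpuidEx () accepts NULL for outputs that are not needed.
  //
  AsmCpuidEx (CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_LEAF, 0, NULL, NULL, &RegEcx, NULL);
  return (BOOLEAN) ((RegEcx & CPUID_SEF_ECX_5LEVEL_PAGE) != 0);
}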
* Re: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode 2019-07-23 15:29 ` Ni, Ray @ 2019-07-23 19:20 ` Laszlo Ersek 2019-07-23 23:54 ` Michael D Kinney 0 siblings, 1 reply; 23+ messages in thread From: Laszlo Ersek @ 2019-07-23 19:20 UTC (permalink / raw) To: Ni, Ray, devel@edk2.groups.io; +Cc: Dong, Eric On 07/23/19 17:29, Ni, Ray wrote: > > >> -----Original Message----- >> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Laszlo Ersek >> Sent: Tuesday, July 23, 2019 5:46 PM >> To: devel@edk2.groups.io; Ni, Ray <ray.ni@intel.com> >> Cc: Dong, Eric <eric.dong@intel.com> >> Subject: Re: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode >> >> On 07/22/19 10:15, Ni, Ray wrote: >>> REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 >>> >>> DxeIpl is responsible to create page table for DXE phase running >>> either in long mode or in 32bit mode with certain protection >>> mechanism enabled (refer to ToBuildPageTable()). >>> >>> The patch updates DxeIpl to create 5-level page table for DXE phase >>> running in long mode when PcdUse5LevelPageTable is TRUE and CPU >>> supports 5-level page table. >>> >>> Signed-off-by: Ray Ni <ray.ni@intel.com> >>> Cc: Eric Dong <eric.dong@intel.com> >>> Cc: Laszlo Ersek <lersek@redhat.com> >>> --- >>> MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf | 1 + >>> .../Core/DxeIplPeim/X64/VirtualMemory.c | 227 ++++++++++++------ >>> 2 files changed, 151 insertions(+), 77 deletions(-) >>> >>> diff --git a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf >>> index abc3217b01..98bc17fc9d 100644 >>> --- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf >>> +++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf >>> @@ -110,6 +110,7 @@ [Pcd.IA32,Pcd.X64] >>> gEfiMdeModulePkgTokenSpaceGuid.PcdNullPointerDetectionPropertyMask ## CONSUMES >>> gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask ## CONSUMES >>> gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard ## CONSUMES >>> + gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable ## SOMETIMES_CONSUMES >>> >>> [Pcd.IA32,Pcd.X64,Pcd.ARM,Pcd.AARCH64] >>> gEfiMdeModulePkgTokenSpaceGuid.PcdSetNxForStack ## SOMETIMES_CONSUMES >>> diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c >>> index edc38e4525..a5bcdcc734 100644 >>> --- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c >>> +++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c >>> @@ -15,7 +15,7 @@ >>> 2) IA-32 Intel(R) Architecture Software Developer's Manual Volume 2:Instruction Set Reference, Intel >>> 3) IA-32 Intel(R) Architecture Software Developer's Manual Volume 3:System Programmer's Guide, Intel >>> >>> -Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR> >>> +Copyright (c) 2006 - 2019, Intel Corporation. All rights reserved.<BR> >>> Copyright (c) 2017, AMD Incorporated. 
All rights reserved.<BR> >>> >>> SPDX-License-Identifier: BSD-2-Clause-Patent >>> @@ -626,14 +626,19 @@ CreateIdentityMappingPageTables ( >>> ) >>> { >>> UINT32 RegEax; >>> + UINT32 RegEbx; >>> + UINT32 RegEcx; >>> UINT32 RegEdx; >>> UINT8 PhysicalAddressBits; >>> EFI_PHYSICAL_ADDRESS PageAddress; >>> + UINTN IndexOfPml5Entries; >>> UINTN IndexOfPml4Entries; >>> UINTN IndexOfPdpEntries; >>> UINTN IndexOfPageDirectoryEntries; >>> + UINT32 NumberOfPml5EntriesNeeded; >>> UINT32 NumberOfPml4EntriesNeeded; >>> UINT32 NumberOfPdpEntriesNeeded; >>> + PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel5Entry; >>> PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel4Entry; >>> PAGE_MAP_AND_DIRECTORY_POINTER *PageMap; >>> PAGE_MAP_AND_DIRECTORY_POINTER *PageDirectoryPointerEntry; >>> @@ -641,9 +646,11 @@ CreateIdentityMappingPageTables ( >>> UINTN TotalPagesNum; >>> UINTN BigPageAddress; >>> VOID *Hob; >>> + BOOLEAN Page5LevelSupport; >>> BOOLEAN Page1GSupport; >>> PAGE_TABLE_1G_ENTRY *PageDirectory1GEntry; >>> UINT64 AddressEncMask; >>> + IA32_CR4 Cr4; >>> >>> // >>> // Make sure AddressEncMask is contained to smallest supported address field >>> @@ -677,33 +684,66 @@ CreateIdentityMappingPageTables ( >>> } >>> } >>> >>> + Page5LevelSupport = FALSE; >>> + if (PcdGetBool (PcdUse5LevelPageTable)) { >>> + AsmCpuidEx (0x7, 0, &RegEax, &RegEbx, &RegEcx, &RegEdx); >>> + DEBUG ((DEBUG_INFO, "Cpuid(7/0): %08x/%08x/%08x/%08x\n", RegEax, RegEbx, RegEcx, RegEdx)); >>> + if ((RegEcx & BIT16) != 0) { >> >> (1) Would it be possible to use macro names here, for Index and SubIndex >> in AsmCpuidEx(), and a "better" macro name than BIT16, in the RegEcx check? > > There are macros which are defined in UefiCpuPkg for the CPUID leaf functions > and structures for the CPUID output data. > But since there is rule that MdeModulePkg cannot depend on UefiCpuPkg so > the code cannot use those macros/structures. > > I am thinking of a new rule that UefiCpuPkg and MdeModulePkg can depend on each other. > I haven't talked with Mike on this. Not sure if he is ok on this. > For this case, yes I can temporary define 2 macros: one for 0x7, the other for BIT16. I'm aware of this restriction. I think it's a good one; personally I wouldn't like MdeModulePkg to depend on UefiCpuPkg. For example, while MdeModulePkg is very necessary for AARCH64 platforms, UefiCpuPkg doesn't appear necessary for AARCH64 platforms. If both MdeModulePkg and UefiCpuPkg depend on such macros, then the macros should arguably be defined in "MdeModulePkg/Include/IndustryStandard", or even "MdePkg/Include/IndustryStandard". The registers and bit-fields come from published industry specifications. (Published by Intel :) ) >>> + Page5LevelSupport = TRUE; >>> + } >>> + } >>> + >>> + DEBUG ((DEBUG_INFO, "AddressBits/5LevelPaging/1GPage = %d/%d/%d\n", PhysicalAddressBits, Page5LevelSupport, >> Page1GSupport)); >>> + >> >> (2) Can we format this log message as: >> >> AddressBits=%d 5LevelPaging=%d 1GPage=%d > > ok. > >> >> ? >> >>> // >>> - // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses. >>> + // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses >>> + // when 5-Level Paging is disabled, >>> + // due to either unsupported by HW, or disabled by PCD. >>> // >>> ASSERT (PhysicalAddressBits <= 52); >>> - if (PhysicalAddressBits > 48) { >>> + if (!Page5LevelSupport && PhysicalAddressBits > 48) { >>> PhysicalAddressBits = 48; >>> } >>> >>> // >>> // Calculate the table entries needed. 
>>> // >>> - if (PhysicalAddressBits <= 39 ) { >>> - NumberOfPml4EntriesNeeded = 1; >>> - NumberOfPdpEntriesNeeded = (UINT32)LShiftU64 (1, (PhysicalAddressBits - 30)); >>> - } else { >>> - NumberOfPml4EntriesNeeded = (UINT32)LShiftU64 (1, (PhysicalAddressBits - 39)); >>> - NumberOfPdpEntriesNeeded = 512; >>> + NumberOfPml5EntriesNeeded = 1; >>> + if (PhysicalAddressBits > 48) { >>> + NumberOfPml5EntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 48); >>> + PhysicalAddressBits = 48; >>> } >>> >>> + NumberOfPml4EntriesNeeded = 1; >>> + if (PhysicalAddressBits > 39) { >>> + NumberOfPml4EntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 39); >>> + PhysicalAddressBits = 39; >>> + } >>> + >>> + NumberOfPdpEntriesNeeded = 1; >>> + ASSERT (PhysicalAddressBits > 30); >>> + NumberOfPdpEntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 30); >>> + >>> // >>> // Pre-allocate big pages to avoid later allocations. >>> // >>> if (!Page1GSupport) { >>> - TotalPagesNum = (NumberOfPdpEntriesNeeded + 1) * NumberOfPml4EntriesNeeded + 1; >>> + TotalPagesNum = ((NumberOfPdpEntriesNeeded + 1) * NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded >> + 1; >>> } else { >>> - TotalPagesNum = NumberOfPml4EntriesNeeded + 1; >>> + TotalPagesNum = (NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1; >>> + } >>> + >>> + // >>> + // Substract the one page occupied by PML5 entries if 5-Level Paging is disabled. >>> + // >>> + if (!Page5LevelSupport) { >>> + TotalPagesNum--; >>> } >>> + >>> + DEBUG ((DEBUG_INFO, "Pml5/Pml4/Pdp/TotalPage = %d/%d/%d/%d\n", >>> + NumberOfPml5EntriesNeeded, NumberOfPml4EntriesNeeded, >>> + NumberOfPdpEntriesNeeded, TotalPagesNum)); >>> + >> >> (3) Same comment about log message formatting as in (2). >> >> (4) Please log UINT32 values with %u, not %d. >> >> (5) Please log UINTN values with %Lu (and cast them to UINT64 first). > > > ok to above 3, 4, 5. > >> >> (6) More generally, can we please replace this patch with three patches, >> in the series? >> >> - the first patch should extract the TotalPagesNum calculation to a >> separate *library* function (it should be a public function in a BASE or >> PEIM library) >> >> - the second patch should extend the TotalPagesNum calculation in the >> library function to 5-level paging >> >> - the third patch should be the remaining code from the present patch. >> >> Here's why: in OvmfPkg/PlatformPei, the GetPeiMemoryCap() function >> duplicates this calculation. In OVMF, PEI-phase memory allocations can >> be dominated by the page tables built for 64-bit DXE, and so >> OvmfPkg/PlatformPei must know, for sizing the permanent PEI RAM, how >> much RAM the DXE IPL PEIM will need for those page tables. >> >> I would *really* dislike copying this update (for 5-level paging) from >> CreateIdentityMappingPageTables() to GetPeiMemoryCap(). Instead, the >> calculation should be extracted to a library function, and then OVMF >> (and all other platforms affected similarly) could call the new function. >> >> If this is not feasible or very much out of scope, then I guess we'll >> just keep the PCD as FALSE in OVMF, for the time being. >> > > BZ https://bugzilla.tianocore.org/show_bug.cgi?id=847 may be what > you need. Oh, indeed. I'm subscribed to this BZ, but the last update was in Jan 2018 :) > Extracting a library API to return how many pages are needed for page tables > needs to consider several cases: > 1. 4K or 2M or 1G page size is used. > 2. range of physical memory > 3. 
page table level > > But right now, this is not in the high priority in my to-do list. > I can help to change GetPeiMemoryCap() for short-term needs. > Are you ok with this? No, I'm not. Obviously, I'm not trying to force you to abstract out the library in question, just for OVMF's sake. What I'm saying is that I strongly prefer delaying 5-level paging support in OVMF until this library becomes available, over duplicating yet more code (and complex code, at that) from other Packages to OvmfPkg. From that perspective, I appreciate the new PCD very much, because it should make this postponing easy. In other words, there is no short-term need. Thank you! Laszlo ^ permalink raw reply [flat|nested] 23+ messages in thread
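To make the shape of the helper requested in comment (6) concrete, a sketch of such a library API is below. No function like this exists in the tree at the time of this thread; the name and signature are invented here, and the arithmetic simply mirrors the TotalPagesNum calculation in the patch. OvmfPkg/PlatformPei's GetPeiMemoryCap() could then call the same helper instead of duplicating the formula.

/**
  Sketch only: return the number of 4KB pages needed for the identity-mapped
  page tables that DxeIpl builds for long-mode DXE.

  @param[in] PhysicalAddressBits  Width of the identity mapping to create.
  @param[in] Page1GSupport        TRUE if 1GB leaf pages are used.
  @param[in] Page5LevelSupport    TRUE if a PML5 root table is created.

  @return Number of 4KB pages to pre-allocate.
**/
UINTN
EFIAPI
GetIdentityMappingPageTablePages (
  IN UINT8    PhysicalAddressBits,
  IN BOOLEAN  Page1GSupport,
  IN BOOLEAN  Page5LevelSupport
  )
{
  UINT32  NumberOfPml5EntriesNeeded;
  UINT32  NumberOfPml4EntriesNeeded;
  UINT32  NumberOfPdpEntriesNeeded;
  UINTN   TotalPagesNum;

  NumberOfPml5EntriesNeeded = 1;
  if (PhysicalAddressBits > 48) {
    NumberOfPml5EntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 48);
    PhysicalAddressBits       = 48;
  }

  NumberOfPml4EntriesNeeded = 1;
  if (PhysicalAddressBits > 39) {
    NumberOfPml4EntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 39);
    PhysicalAddressBits       = 39;
  }

  NumberOfPdpEntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 30);

  if (!Page1GSupport) {
    //
    // Per PML4 entry: one PDPT page plus one PD page per PDPT entry;
    // per PML5 entry: one PML4 page; plus the root page.
    //
    TotalPagesNum = ((NumberOfPdpEntriesNeeded + 1) * NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1;
  } else {
    //
    // With 1GB leaf pages the PDPT is the lowest table level, so no PD pages are needed.
    //
    TotalPagesNum = (NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1;
  }

  if (!Page5LevelSupport) {
    //
    // No PML5 root page when 4-level paging is used.
    //
    TotalPagesNum--;
  }

  return TotalPagesNum;
}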
* Re: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode 2019-07-23 19:20 ` Laszlo Ersek @ 2019-07-23 23:54 ` Michael D Kinney 2019-07-24 1:40 ` Ni, Ray 0 siblings, 1 reply; 23+ messages in thread From: Michael D Kinney @ 2019-07-23 23:54 UTC (permalink / raw) To: devel@edk2.groups.io, lersek@redhat.com, Ni, Ray, Kinney, Michael D Cc: Dong, Eric Laszlo, There already a few examples in MdePkg/Include/Library/BaseLib.h. For example, the bit field structures for CR0, CR4, EFLAGS, and a segment descriptor are in that .h file. These are all within: #if defined (MDE_CPU_IA32) || defined (MDE_CPU_X64) . . . #endif We have since used a standard way to provide .h files for registers. Best example is in UefiCpuPkg/Include/Register. It may make sense to put the register definitions required by MdePkg and MdeModulePkg in MdePkg/Include/Register, and files that use those register types can include the required register definition include files. Best regards, Mike > -----Original Message----- > From: devel@edk2.groups.io > [mailto:devel@edk2.groups.io] On Behalf Of Laszlo Ersek > Sent: Tuesday, July 23, 2019 12:20 PM > To: Ni, Ray <ray.ni@intel.com>; devel@edk2.groups.io > Cc: Dong, Eric <eric.dong@intel.com> > Subject: Re: [edk2-devel] [PATCH 4/4] > MdeModulePkg/DxeIpl: Create 5-level page table for long > mode > > On 07/23/19 17:29, Ni, Ray wrote: > > > > > >> -----Original Message----- > >> From: devel@edk2.groups.io <devel@edk2.groups.io> On > Behalf Of Laszlo > >> Ersek > >> Sent: Tuesday, July 23, 2019 5:46 PM > >> To: devel@edk2.groups.io; Ni, Ray <ray.ni@intel.com> > >> Cc: Dong, Eric <eric.dong@intel.com> > >> Subject: Re: [edk2-devel] [PATCH 4/4] > MdeModulePkg/DxeIpl: Create > >> 5-level page table for long mode > >> > >> On 07/22/19 10:15, Ni, Ray wrote: > >>> REF: > https://bugzilla.tianocore.org/show_bug.cgi?id=2008 > >>> > >>> DxeIpl is responsible to create page table for DXE > phase running > >>> either in long mode or in 32bit mode with certain > protection > >>> mechanism enabled (refer to ToBuildPageTable()). > >>> > >>> The patch updates DxeIpl to create 5-level page > table for DXE phase > >>> running in long mode when PcdUse5LevelPageTable is > TRUE and CPU > >>> supports 5-level page table. 
> >>> > >>> Signed-off-by: Ray Ni <ray.ni@intel.com> > >>> Cc: Eric Dong <eric.dong@intel.com> > >>> Cc: Laszlo Ersek <lersek@redhat.com> > >>> --- > >>> MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf | > 1 + > >>> .../Core/DxeIplPeim/X64/VirtualMemory.c | > 227 ++++++++++++------ > >>> 2 files changed, 151 insertions(+), 77 deletions(- > ) > >>> > >>> diff --git > a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > >>> b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > >>> index abc3217b01..98bc17fc9d 100644 > >>> --- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > >>> +++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > >>> @@ -110,6 +110,7 @@ [Pcd.IA32,Pcd.X64] > >>> > gEfiMdeModulePkgTokenSpaceGuid.PcdNullPointerDetectionP > ropertyMask ## CONSUMES > >>> > gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask > ## CONSUMES > >>> gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard > ## CONSUMES > >>> + > gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable > ## SOMETIMES_CONSUMES > >>> > >>> [Pcd.IA32,Pcd.X64,Pcd.ARM,Pcd.AARCH64] > >>> gEfiMdeModulePkgTokenSpaceGuid.PcdSetNxForStack > ## SOMETIMES_CONSUMES > >>> diff --git > a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > >>> b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > >>> index edc38e4525..a5bcdcc734 100644 > >>> --- > a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > >>> +++ > b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > >>> @@ -15,7 +15,7 @@ > >>> 2) IA-32 Intel(R) Architecture Software > Developer's Manual Volume 2:Instruction Set Reference, > Intel > >>> 3) IA-32 Intel(R) Architecture Software > Developer's Manual > >>> Volume 3:System Programmer's Guide, Intel > >>> > >>> -Copyright (c) 2006 - 2018, Intel Corporation. All > rights > >>> reserved.<BR> > >>> +Copyright (c) 2006 - 2019, Intel Corporation. All > rights > >>> +reserved.<BR> > >>> Copyright (c) 2017, AMD Incorporated. 
All rights > reserved.<BR> > >>> > >>> SPDX-License-Identifier: BSD-2-Clause-Patent @@ - > 626,14 +626,19 @@ > >>> CreateIdentityMappingPageTables ( > >>> ) > >>> { > >>> UINT32 > RegEax; > >>> + UINT32 > RegEbx; > >>> + UINT32 > RegEcx; > >>> UINT32 > RegEdx; > >>> UINT8 > PhysicalAddressBits; > >>> EFI_PHYSICAL_ADDRESS > PageAddress; > >>> + UINTN > IndexOfPml5Entries; > >>> UINTN > IndexOfPml4Entries; > >>> UINTN > IndexOfPdpEntries; > >>> UINTN > IndexOfPageDirectoryEntries; > >>> + UINT32 > NumberOfPml5EntriesNeeded; > >>> UINT32 > NumberOfPml4EntriesNeeded; > >>> UINT32 > NumberOfPdpEntriesNeeded; > >>> + PAGE_MAP_AND_DIRECTORY_POINTER > *PageMapLevel5Entry; > >>> PAGE_MAP_AND_DIRECTORY_POINTER > *PageMapLevel4Entry; > >>> PAGE_MAP_AND_DIRECTORY_POINTER > *PageMap; > >>> PAGE_MAP_AND_DIRECTORY_POINTER > *PageDirectoryPointerEntry; > >>> @@ -641,9 +646,11 @@ > CreateIdentityMappingPageTables ( > >>> UINTN > TotalPagesNum; > >>> UINTN > BigPageAddress; > >>> VOID > *Hob; > >>> + BOOLEAN > Page5LevelSupport; > >>> BOOLEAN > Page1GSupport; > >>> PAGE_TABLE_1G_ENTRY > *PageDirectory1GEntry; > >>> UINT64 > AddressEncMask; > >>> + IA32_CR4 > Cr4; > >>> > >>> // > >>> // Make sure AddressEncMask is contained to > smallest supported > >>> address field @@ -677,33 +684,66 @@ > CreateIdentityMappingPageTables ( > >>> } > >>> } > >>> > >>> + Page5LevelSupport = FALSE; > >>> + if (PcdGetBool (PcdUse5LevelPageTable)) { > >>> + AsmCpuidEx (0x7, 0, &RegEax, &RegEbx, &RegEcx, > &RegEdx); > >>> + DEBUG ((DEBUG_INFO, "Cpuid(7/0): > %08x/%08x/%08x/%08x\n", RegEax, RegEbx, RegEcx, > RegEdx)); > >>> + if ((RegEcx & BIT16) != 0) { > >> > >> (1) Would it be possible to use macro names here, > for Index and > >> SubIndex in AsmCpuidEx(), and a "better" macro name > than BIT16, in the RegEcx check? > > > > There are macros which are defined in UefiCpuPkg for > the CPUID leaf > > functions and structures for the CPUID output data. > > But since there is rule that MdeModulePkg cannot > depend on UefiCpuPkg > > so the code cannot use those macros/structures. > > > > I am thinking of a new rule that UefiCpuPkg and > MdeModulePkg can depend on each other. > > I haven't talked with Mike on this. Not sure if he is > ok on this. > > For this case, yes I can temporary define 2 macros: > one for 0x7, the other for BIT16. > > I'm aware of this restriction. I think it's a good one; > personally I wouldn't like MdeModulePkg to depend on > UefiCpuPkg. For example, while MdeModulePkg is very > necessary for AARCH64 platforms, UefiCpuPkg doesn't > appear necessary for AARCH64 platforms. > > If both MdeModulePkg and UefiCpuPkg depend on such > macros, then the macros should arguably be defined in > "MdeModulePkg/Include/IndustryStandard", or even > "MdePkg/Include/IndustryStandard". The registers and > bit-fields come from published industry specifications. > (Published by Intel :) ) > > >>> + Page5LevelSupport = TRUE; > >>> + } > >>> + } > >>> + > >>> + DEBUG ((DEBUG_INFO, > "AddressBits/5LevelPaging/1GPage = > >>> + %d/%d/%d\n", PhysicalAddressBits, > Page5LevelSupport, > >> Page1GSupport)); > >>> + > >> > >> (2) Can we format this log message as: > >> > >> AddressBits=%d 5LevelPaging=%d 1GPage=%d > > > > ok. > > > >> > >> ? > >> > >>> // > >>> - // IA-32e paging translates 48-bit linear > addresses to 52-bit physical addresses. 
> >>> + // IA-32e paging translates 48-bit linear > addresses to 52-bit > >>> + physical addresses // when 5-Level Paging is > disabled, // due > >>> + to either unsupported by HW, or disabled by PCD. > >>> // > >>> ASSERT (PhysicalAddressBits <= 52); > >>> - if (PhysicalAddressBits > 48) { > >>> + if (!Page5LevelSupport && PhysicalAddressBits > > 48) { > >>> PhysicalAddressBits = 48; > >>> } > >>> > >>> // > >>> // Calculate the table entries needed. > >>> // > >>> - if (PhysicalAddressBits <= 39 ) { > >>> - NumberOfPml4EntriesNeeded = 1; > >>> - NumberOfPdpEntriesNeeded = (UINT32)LShiftU64 > (1, (PhysicalAddressBits - 30)); > >>> - } else { > >>> - NumberOfPml4EntriesNeeded = (UINT32)LShiftU64 > (1, (PhysicalAddressBits - 39)); > >>> - NumberOfPdpEntriesNeeded = 512; > >>> + NumberOfPml5EntriesNeeded = 1; > >>> + if (PhysicalAddressBits > 48) { > >>> + NumberOfPml5EntriesNeeded = (UINT32) LShiftU64 > (1, PhysicalAddressBits - 48); > >>> + PhysicalAddressBits = 48; > >>> } > >>> > >>> + NumberOfPml4EntriesNeeded = 1; > >>> + if (PhysicalAddressBits > 39) { > >>> + NumberOfPml4EntriesNeeded = (UINT32) LShiftU64 > (1, PhysicalAddressBits - 39); > >>> + PhysicalAddressBits = 39; > >>> + } > >>> + > >>> + NumberOfPdpEntriesNeeded = 1; > >>> + ASSERT (PhysicalAddressBits > 30); > NumberOfPdpEntriesNeeded = > >>> + (UINT32) LShiftU64 (1, PhysicalAddressBits - 30); > >>> + > >>> // > >>> // Pre-allocate big pages to avoid later > allocations. > >>> // > >>> if (!Page1GSupport) { > >>> - TotalPagesNum = (NumberOfPdpEntriesNeeded + 1) > * NumberOfPml4EntriesNeeded + 1; > >>> + TotalPagesNum = ((NumberOfPdpEntriesNeeded + > 1) * > >>> + NumberOfPml4EntriesNeeded + 1) * > NumberOfPml5EntriesNeeded > >> + 1; > >>> } else { > >>> - TotalPagesNum = NumberOfPml4EntriesNeeded + 1; > >>> + TotalPagesNum = (NumberOfPml4EntriesNeeded + > 1) * > >>> + NumberOfPml5EntriesNeeded + 1; } > >>> + > >>> + // > >>> + // Substract the one page occupied by PML5 > entries if 5-Level Paging is disabled. > >>> + // > >>> + if (!Page5LevelSupport) { > >>> + TotalPagesNum--; > >>> } > >>> + > >>> + DEBUG ((DEBUG_INFO, "Pml5/Pml4/Pdp/TotalPage = > %d/%d/%d/%d\n", > >>> + NumberOfPml5EntriesNeeded, > NumberOfPml4EntriesNeeded, > >>> + NumberOfPdpEntriesNeeded, TotalPagesNum)); > >>> + > >> > >> (3) Same comment about log message formatting as in > (2). > >> > >> (4) Please log UINT32 values with %u, not %d. > >> > >> (5) Please log UINTN values with %Lu (and cast them > to UINT64 first). > > > > > > ok to above 3, 4, 5. > > > >> > >> (6) More generally, can we please replace this patch > with three > >> patches, in the series? > >> > >> - the first patch should extract the TotalPagesNum > calculation to a > >> separate *library* function (it should be a public > function in a BASE > >> or PEIM library) > >> > >> - the second patch should extend the TotalPagesNum > calculation in the > >> library function to 5-level paging > >> > >> - the third patch should be the remaining code from > the present patch. > >> > >> Here's why: in OvmfPkg/PlatformPei, the > GetPeiMemoryCap() function > >> duplicates this calculation. In OVMF, PEI-phase > memory allocations > >> can be dominated by the page tables built for 64-bit > DXE, and so > >> OvmfPkg/PlatformPei must know, for sizing the > permanent PEI RAM, how > >> much RAM the DXE IPL PEIM will need for those page > tables. > >> > >> I would *really* dislike copying this update (for 5- > level paging) > >> from > >> CreateIdentityMappingPageTables() to > GetPeiMemoryCap(). 
Instead, the > >> calculation should be extracted to a library > function, and then OVMF > >> (and all other platforms affected similarly) could > call the new function. > >> > >> If this is not feasible or very much out of scope, > then I guess we'll > >> just keep the PCD as FALSE in OVMF, for the time > being. > >> > > > > BZ https://bugzilla.tianocore.org/show_bug.cgi?id=847 > may be what you > > need. > > Oh, indeed. I'm subscribed to this BZ, but the last > update was in Jan > 2018 :) > > > Extracting a library API to return how many pages are > needed for page > > tables needs to consider several cases: > > 1. 4K or 2M or 1G page size is used. > > 2. range of physical memory > > 3. page table level > > > > But right now, this is not in the high priority in my > to-do list. > > I can help to change GetPeiMemoryCap() for short-term > needs. > > Are you ok with this? > > No, I'm not. > > Obviously, I'm not trying to force you to abstract out > the library in question, just for OVMF's sake. What I'm > saying is that I strongly prefer delaying 5-level > paging support in OVMF until this library becomes > available, over duplicating yet more code (and complex > code, at > that) from other Packages to OvmfPkg. From that > perspective, I appreciate the new PCD very much, > because it should make this postponing easy. > > In other words, there is no short-term need. > > Thank you! > Laszlo > > ^ permalink raw reply [flat|nested] 23+ messages in thread
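For readers less familiar with the Include/Register convention Mike points to, a simplified illustration of that pattern follows. It is not the actual content of UefiCpuPkg/Include/Register/Cpuid.h (the real header defines many more bit fields for this leaf); it only shows the leaf-number macro plus a bit-field union for one output register, so consumers can test a named field instead of open-coding BIT16:

//
// Simplified illustration of the Register-header pattern; not the real header.
//
#define CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS  0x07

typedef union {
  struct {
    UINT32  Reserved0     : 16;
    UINT32  FiveLevelPage : 1;   ///< [Bit 16] 57-bit linear addresses and 5-level paging.
    UINT32  Reserved1     : 15;
  } Bits;
  UINT32  Uint32;
} CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_ECX;

With such a header visible to MdeModulePkg, the DxeIpl check could read the ECX output into the union and test Bits.FiveLevelPage directly.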
* Re: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode 2019-07-23 23:54 ` Michael D Kinney @ 2019-07-24 1:40 ` Ni, Ray 0 siblings, 0 replies; 23+ messages in thread From: Ni, Ray @ 2019-07-24 1:40 UTC (permalink / raw) To: Kinney, Michael D, devel@edk2.groups.io, lersek@redhat.com; +Cc: Dong, Eric Mike, Thanks for the suggestion. I will try to move Cpuid.h to MdePkg/Include/Register directory in V2 patch. Thanks, Ray > -----Original Message----- > From: Kinney, Michael D > Sent: Wednesday, July 24, 2019 7:54 AM > To: devel@edk2.groups.io; lersek@redhat.com; Ni, Ray <ray.ni@intel.com>; Kinney, Michael D <michael.d.kinney@intel.com> > Cc: Dong, Eric <eric.dong@intel.com> > Subject: RE: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode > > Laszlo, > > There already a few examples in MdePkg/Include/Library/BaseLib.h. > For example, the bit field structures for CR0, CR4, EFLAGS, > and a segment descriptor are in that .h file. These are all > within: > > #if defined (MDE_CPU_IA32) || defined (MDE_CPU_X64) > . . . > #endif > > We have since used a standard way to provide .h files for > registers. Best example is in UefiCpuPkg/Include/Register. > > It may make sense to put the register definitions required > by MdePkg and MdeModulePkg in MdePkg/Include/Register, and > files that use those register types can include the > required register definition include files. > > Best regards, > > Mike > > > -----Original Message----- > > From: devel@edk2.groups.io > > [mailto:devel@edk2.groups.io] On Behalf Of Laszlo Ersek > > Sent: Tuesday, July 23, 2019 12:20 PM > > To: Ni, Ray <ray.ni@intel.com>; devel@edk2.groups.io > > Cc: Dong, Eric <eric.dong@intel.com> > > Subject: Re: [edk2-devel] [PATCH 4/4] > > MdeModulePkg/DxeIpl: Create 5-level page table for long > > mode > > > > On 07/23/19 17:29, Ni, Ray wrote: > > > > > > > > >> -----Original Message----- > > >> From: devel@edk2.groups.io <devel@edk2.groups.io> On > > Behalf Of Laszlo > > >> Ersek > > >> Sent: Tuesday, July 23, 2019 5:46 PM > > >> To: devel@edk2.groups.io; Ni, Ray <ray.ni@intel.com> > > >> Cc: Dong, Eric <eric.dong@intel.com> > > >> Subject: Re: [edk2-devel] [PATCH 4/4] > > MdeModulePkg/DxeIpl: Create > > >> 5-level page table for long mode > > >> > > >> On 07/22/19 10:15, Ni, Ray wrote: > > >>> REF: > > https://bugzilla.tianocore.org/show_bug.cgi?id=2008 > > >>> > > >>> DxeIpl is responsible to create page table for DXE > > phase running > > >>> either in long mode or in 32bit mode with certain > > protection > > >>> mechanism enabled (refer to ToBuildPageTable()). > > >>> > > >>> The patch updates DxeIpl to create 5-level page > > table for DXE phase > > >>> running in long mode when PcdUse5LevelPageTable is > > TRUE and CPU > > >>> supports 5-level page table. 
> > >>> > > >>> Signed-off-by: Ray Ni <ray.ni@intel.com> > > >>> Cc: Eric Dong <eric.dong@intel.com> > > >>> Cc: Laszlo Ersek <lersek@redhat.com> > > >>> --- > > >>> MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf | > > 1 + > > >>> .../Core/DxeIplPeim/X64/VirtualMemory.c | > > 227 ++++++++++++------ > > >>> 2 files changed, 151 insertions(+), 77 deletions(- > > ) > > >>> > > >>> diff --git > > a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > > >>> b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > > >>> index abc3217b01..98bc17fc9d 100644 > > >>> --- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > > >>> +++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > > >>> @@ -110,6 +110,7 @@ [Pcd.IA32,Pcd.X64] > > >>> > > gEfiMdeModulePkgTokenSpaceGuid.PcdNullPointerDetectionP > > ropertyMask ## CONSUMES > > >>> > > gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask > > ## CONSUMES > > >>> gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard > > ## CONSUMES > > >>> + > > gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable > > ## SOMETIMES_CONSUMES > > >>> > > >>> [Pcd.IA32,Pcd.X64,Pcd.ARM,Pcd.AARCH64] > > >>> gEfiMdeModulePkgTokenSpaceGuid.PcdSetNxForStack > > ## SOMETIMES_CONSUMES > > >>> diff --git > > a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > > >>> b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > > >>> index edc38e4525..a5bcdcc734 100644 > > >>> --- > > a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > > >>> +++ > > b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > > >>> @@ -15,7 +15,7 @@ > > >>> 2) IA-32 Intel(R) Architecture Software > > Developer's Manual Volume 2:Instruction Set Reference, > > Intel > > >>> 3) IA-32 Intel(R) Architecture Software > > Developer's Manual > > >>> Volume 3:System Programmer's Guide, Intel > > >>> > > >>> -Copyright (c) 2006 - 2018, Intel Corporation. All > > rights > > >>> reserved.<BR> > > >>> +Copyright (c) 2006 - 2019, Intel Corporation. All > > rights > > >>> +reserved.<BR> > > >>> Copyright (c) 2017, AMD Incorporated. 
All rights > > reserved.<BR> > > >>> > > >>> SPDX-License-Identifier: BSD-2-Clause-Patent @@ - > > 626,14 +626,19 @@ > > >>> CreateIdentityMappingPageTables ( > > >>> ) > > >>> { > > >>> UINT32 > > RegEax; > > >>> + UINT32 > > RegEbx; > > >>> + UINT32 > > RegEcx; > > >>> UINT32 > > RegEdx; > > >>> UINT8 > > PhysicalAddressBits; > > >>> EFI_PHYSICAL_ADDRESS > > PageAddress; > > >>> + UINTN > > IndexOfPml5Entries; > > >>> UINTN > > IndexOfPml4Entries; > > >>> UINTN > > IndexOfPdpEntries; > > >>> UINTN > > IndexOfPageDirectoryEntries; > > >>> + UINT32 > > NumberOfPml5EntriesNeeded; > > >>> UINT32 > > NumberOfPml4EntriesNeeded; > > >>> UINT32 > > NumberOfPdpEntriesNeeded; > > >>> + PAGE_MAP_AND_DIRECTORY_POINTER > > *PageMapLevel5Entry; > > >>> PAGE_MAP_AND_DIRECTORY_POINTER > > *PageMapLevel4Entry; > > >>> PAGE_MAP_AND_DIRECTORY_POINTER > > *PageMap; > > >>> PAGE_MAP_AND_DIRECTORY_POINTER > > *PageDirectoryPointerEntry; > > >>> @@ -641,9 +646,11 @@ > > CreateIdentityMappingPageTables ( > > >>> UINTN > > TotalPagesNum; > > >>> UINTN > > BigPageAddress; > > >>> VOID > > *Hob; > > >>> + BOOLEAN > > Page5LevelSupport; > > >>> BOOLEAN > > Page1GSupport; > > >>> PAGE_TABLE_1G_ENTRY > > *PageDirectory1GEntry; > > >>> UINT64 > > AddressEncMask; > > >>> + IA32_CR4 > > Cr4; > > >>> > > >>> // > > >>> // Make sure AddressEncMask is contained to > > smallest supported > > >>> address field @@ -677,33 +684,66 @@ > > CreateIdentityMappingPageTables ( > > >>> } > > >>> } > > >>> > > >>> + Page5LevelSupport = FALSE; > > >>> + if (PcdGetBool (PcdUse5LevelPageTable)) { > > >>> + AsmCpuidEx (0x7, 0, &RegEax, &RegEbx, &RegEcx, > > &RegEdx); > > >>> + DEBUG ((DEBUG_INFO, "Cpuid(7/0): > > %08x/%08x/%08x/%08x\n", RegEax, RegEbx, RegEcx, > > RegEdx)); > > >>> + if ((RegEcx & BIT16) != 0) { > > >> > > >> (1) Would it be possible to use macro names here, > > for Index and > > >> SubIndex in AsmCpuidEx(), and a "better" macro name > > than BIT16, in the RegEcx check? > > > > > > There are macros which are defined in UefiCpuPkg for > > the CPUID leaf > > > functions and structures for the CPUID output data. > > > But since there is rule that MdeModulePkg cannot > > depend on UefiCpuPkg > > > so the code cannot use those macros/structures. > > > > > > I am thinking of a new rule that UefiCpuPkg and > > MdeModulePkg can depend on each other. > > > I haven't talked with Mike on this. Not sure if he is > > ok on this. > > > For this case, yes I can temporary define 2 macros: > > one for 0x7, the other for BIT16. > > > > I'm aware of this restriction. I think it's a good one; > > personally I wouldn't like MdeModulePkg to depend on > > UefiCpuPkg. For example, while MdeModulePkg is very > > necessary for AARCH64 platforms, UefiCpuPkg doesn't > > appear necessary for AARCH64 platforms. > > > > If both MdeModulePkg and UefiCpuPkg depend on such > > macros, then the macros should arguably be defined in > > "MdeModulePkg/Include/IndustryStandard", or even > > "MdePkg/Include/IndustryStandard". The registers and > > bit-fields come from published industry specifications. > > (Published by Intel :) ) > > > > >>> + Page5LevelSupport = TRUE; > > >>> + } > > >>> + } > > >>> + > > >>> + DEBUG ((DEBUG_INFO, > > "AddressBits/5LevelPaging/1GPage = > > >>> + %d/%d/%d\n", PhysicalAddressBits, > > Page5LevelSupport, > > >> Page1GSupport)); > > >>> + > > >> > > >> (2) Can we format this log message as: > > >> > > >> AddressBits=%d 5LevelPaging=%d 1GPage=%d > > > > > > ok. > > > > > >> > > >> ? 
> > >> > > >>> // > > >>> - // IA-32e paging translates 48-bit linear > > addresses to 52-bit physical addresses. > > >>> + // IA-32e paging translates 48-bit linear > > addresses to 52-bit > > >>> + physical addresses // when 5-Level Paging is > > disabled, // due > > >>> + to either unsupported by HW, or disabled by PCD. > > >>> // > > >>> ASSERT (PhysicalAddressBits <= 52); > > >>> - if (PhysicalAddressBits > 48) { > > >>> + if (!Page5LevelSupport && PhysicalAddressBits > > > 48) { > > >>> PhysicalAddressBits = 48; > > >>> } > > >>> > > >>> // > > >>> // Calculate the table entries needed. > > >>> // > > >>> - if (PhysicalAddressBits <= 39 ) { > > >>> - NumberOfPml4EntriesNeeded = 1; > > >>> - NumberOfPdpEntriesNeeded = (UINT32)LShiftU64 > > (1, (PhysicalAddressBits - 30)); > > >>> - } else { > > >>> - NumberOfPml4EntriesNeeded = (UINT32)LShiftU64 > > (1, (PhysicalAddressBits - 39)); > > >>> - NumberOfPdpEntriesNeeded = 512; > > >>> + NumberOfPml5EntriesNeeded = 1; > > >>> + if (PhysicalAddressBits > 48) { > > >>> + NumberOfPml5EntriesNeeded = (UINT32) LShiftU64 > > (1, PhysicalAddressBits - 48); > > >>> + PhysicalAddressBits = 48; > > >>> } > > >>> > > >>> + NumberOfPml4EntriesNeeded = 1; > > >>> + if (PhysicalAddressBits > 39) { > > >>> + NumberOfPml4EntriesNeeded = (UINT32) LShiftU64 > > (1, PhysicalAddressBits - 39); > > >>> + PhysicalAddressBits = 39; > > >>> + } > > >>> + > > >>> + NumberOfPdpEntriesNeeded = 1; > > >>> + ASSERT (PhysicalAddressBits > 30); > > NumberOfPdpEntriesNeeded = > > >>> + (UINT32) LShiftU64 (1, PhysicalAddressBits - 30); > > >>> + > > >>> // > > >>> // Pre-allocate big pages to avoid later > > allocations. > > >>> // > > >>> if (!Page1GSupport) { > > >>> - TotalPagesNum = (NumberOfPdpEntriesNeeded + 1) > > * NumberOfPml4EntriesNeeded + 1; > > >>> + TotalPagesNum = ((NumberOfPdpEntriesNeeded + > > 1) * > > >>> + NumberOfPml4EntriesNeeded + 1) * > > NumberOfPml5EntriesNeeded > > >> + 1; > > >>> } else { > > >>> - TotalPagesNum = NumberOfPml4EntriesNeeded + 1; > > >>> + TotalPagesNum = (NumberOfPml4EntriesNeeded + > > 1) * > > >>> + NumberOfPml5EntriesNeeded + 1; } > > >>> + > > >>> + // > > >>> + // Substract the one page occupied by PML5 > > entries if 5-Level Paging is disabled. > > >>> + // > > >>> + if (!Page5LevelSupport) { > > >>> + TotalPagesNum--; > > >>> } > > >>> + > > >>> + DEBUG ((DEBUG_INFO, "Pml5/Pml4/Pdp/TotalPage = > > %d/%d/%d/%d\n", > > >>> + NumberOfPml5EntriesNeeded, > > NumberOfPml4EntriesNeeded, > > >>> + NumberOfPdpEntriesNeeded, TotalPagesNum)); > > >>> + > > >> > > >> (3) Same comment about log message formatting as in > > (2). > > >> > > >> (4) Please log UINT32 values with %u, not %d. > > >> > > >> (5) Please log UINTN values with %Lu (and cast them > > to UINT64 first). > > > > > > > > > ok to above 3, 4, 5. > > > > > >> > > >> (6) More generally, can we please replace this patch > > with three > > >> patches, in the series? > > >> > > >> - the first patch should extract the TotalPagesNum > > calculation to a > > >> separate *library* function (it should be a public > > function in a BASE > > >> or PEIM library) > > >> > > >> - the second patch should extend the TotalPagesNum > > calculation in the > > >> library function to 5-level paging > > >> > > >> - the third patch should be the remaining code from > > the present patch. > > >> > > >> Here's why: in OvmfPkg/PlatformPei, the > > GetPeiMemoryCap() function > > >> duplicates this calculation. 
In OVMF, PEI-phase > > memory allocations > > >> can be dominated by the page tables built for 64-bit > > DXE, and so > > >> OvmfPkg/PlatformPei must know, for sizing the > > permanent PEI RAM, how > > >> much RAM the DXE IPL PEIM will need for those page > > tables. > > >> > > >> I would *really* dislike copying this update (for 5- > > level paging) > > >> from > > >> CreateIdentityMappingPageTables() to > > GetPeiMemoryCap(). Instead, the > > >> calculation should be extracted to a library > > function, and then OVMF > > >> (and all other platforms affected similarly) could > > call the new function. > > >> > > >> If this is not feasible or very much out of scope, > > then I guess we'll > > >> just keep the PCD as FALSE in OVMF, for the time > > being. > > >> > > > > > > BZ https://bugzilla.tianocore.org/show_bug.cgi?id=847 > > may be what you > > > need. > > > > Oh, indeed. I'm subscribed to this BZ, but the last > > update was in Jan > > 2018 :) > > > > > Extracting a library API to return how many pages are > > needed for page > > > tables needs to consider several cases: > > > 1. 4K or 2M or 1G page size is used. > > > 2. range of physical memory > > > 3. page table level > > > > > > But right now, this is not in the high priority in my > > to-do list. > > > I can help to change GetPeiMemoryCap() for short-term > > needs. > > > Are you ok with this? > > > > No, I'm not. > > > > Obviously, I'm not trying to force you to abstract out > > the library in question, just for OVMF's sake. What I'm > > saying is that I strongly prefer delaying 5-level > > paging support in OVMF until this library becomes > > available, over duplicating yet more code (and complex > > code, at > > that) from other Packages to OvmfPkg. From that > > perspective, I appreciate the new PCD very much, > > because it should make this postponing easy. > > > > In other words, there is no short-term need. > > > > Thank you! > > Laszlo > > > > ^ permalink raw reply [flat|nested] 23+ messages in thread
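Assuming the header does land under MdePkg/Include/Register as planned for v2, the check in VirtualMemory.c could then use structured types rather than raw 0x7 and BIT16. A hypothetical fragment (macro and field names mirror the simplified illustration earlier in the thread and may differ in the final patch):

  CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_ECX  EcxFlags;

  Page5LevelSupport = FALSE;
  if (PcdGetBool (PcdUse5LevelPageTable)) {
    AsmCpuidEx (CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS, 0, NULL, NULL, &EcxFlags.Uint32, NULL);
    if (EcxFlags.Bits.FiveLevelPage == 1) {
      Page5LevelSupport = TRUE;
    }
  }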
* Re: [edk2-devel] [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD PcdUse5LevelPageTable [not found] ` <15B3ACB4E8DDF416.7925@groups.io> @ 2019-07-22 8:28 ` Ni, Ray 0 siblings, 0 replies; 23+ messages in thread From: Ni, Ray @ 2019-07-22 8:28 UTC (permalink / raw) To: devel@edk2.groups.io, Ni, Ray Cc: Dong, Eric, Laszlo Ersek, Wang, Jian J, Wu, Hao A Forgot to include MdeModulePkg maintainers for review. > -----Original Message----- > From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Ni, Ray > Sent: Monday, July 22, 2019 4:16 PM > To: devel@edk2.groups.io > Cc: Dong, Eric <eric.dong@intel.com>; Laszlo Ersek <lersek@redhat.com> > Subject: [edk2-devel] [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD > PcdUse5LevelPageTable > > REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 > > The PCD indicates if 5-Level Paging will be enabled in long mode. > 5-Level Paging will not be enabled when the PCD is TRUE but CPU doesn't > support 5-Level Paging. > > Signed-off-by: Ray Ni <ray.ni@intel.com> > Cc: Eric Dong <eric.dong@intel.com> > Cc: Laszlo Ersek <lersek@redhat.com> > --- > MdeModulePkg/MdeModulePkg.dec | 7 +++++++ > 1 file changed, 7 insertions(+) > > diff --git a/MdeModulePkg/MdeModulePkg.dec > b/MdeModulePkg/MdeModulePkg.dec index 12e0bbf579..21388595a9 > 100644 > --- a/MdeModulePkg/MdeModulePkg.dec > +++ b/MdeModulePkg/MdeModulePkg.dec > @@ -1991,6 +1991,13 @@ [PcdsFixedAtBuild, PcdsPatchableInModule, > PcdsDynamic, PcdsDynamicEx] > # @Prompt The address mask when memory encryption is enabled. > > gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrM > ask|0x0|UINT64|0x30001047 > > + ## Indicates if 5-Level Paging will be enabled in long mode. 5-Level > + Paging will not be enabled # when the PCD is TRUE but CPU doesn't > support 5-Level Paging. > + # TRUE - 5-Level Paging will be enabled.<BR> > + # FALSE - 5-Level Paging will not be enabled.<BR> > + # @Prompt Enable 5-Level Paging support in long mode. > + > + > gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable|FALSE|BOOLE > AN|0x0 > + 001105F > + > ## Capsule In Ram is to use memory to deliver the capsules that will be > processed after system > # reset.<BR><BR> > # This PCD indicates if the Capsule In Ram is supported.<BR> > -- > 2.21.0.windows.1 > > > ^ permalink raw reply [flat|nested] 23+ messages in thread
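Since PcdUse5LevelPageTable defaults to FALSE, a platform has to opt in explicitly; even then, 5-Level Paging is only enabled when the CPU reports support for it. A minimal, hypothetical platform DSC override (the PCD is declared for FixedAtBuild among other access methods) would look like:

[PcdsFixedAtBuild.X64]
  gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable|TRUE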
[parent not found: <15B3ACB536E52165.29669@groups.io>]
* Re: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode [not found] ` <15B3ACB536E52165.29669@groups.io> @ 2019-07-22 8:28 ` Ni, Ray 0 siblings, 0 replies; 23+ messages in thread From: Ni, Ray @ 2019-07-22 8:28 UTC (permalink / raw) To: devel@edk2.groups.io, Ni, Ray Cc: Dong, Eric, Laszlo Ersek, Wu, Hao A, Wang, Jian J Forgot to include MdeModulePkg maintainers for review. > -----Original Message----- > From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Ni, Ray > Sent: Monday, July 22, 2019 4:16 PM > To: devel@edk2.groups.io > Cc: Dong, Eric <eric.dong@intel.com>; Laszlo Ersek <lersek@redhat.com> > Subject: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level > page table for long mode > > REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008 > > DxeIpl is responsible to create page table for DXE phase running either in > long mode or in 32bit mode with certain protection mechanism enabled > (refer to ToBuildPageTable()). > > The patch updates DxeIpl to create 5-level page table for DXE phase running > in long mode when PcdUse5LevelPageTable is TRUE and CPU supports 5- > level page table. > > Signed-off-by: Ray Ni <ray.ni@intel.com> > Cc: Eric Dong <eric.dong@intel.com> > Cc: Laszlo Ersek <lersek@redhat.com> > --- > MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf | 1 + > .../Core/DxeIplPeim/X64/VirtualMemory.c | 227 ++++++++++++------ > 2 files changed, 151 insertions(+), 77 deletions(-) > > diff --git a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > index abc3217b01..98bc17fc9d 100644 > --- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > +++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf > @@ -110,6 +110,7 @@ [Pcd.IA32,Pcd.X64] > > gEfiMdeModulePkgTokenSpaceGuid.PcdNullPointerDetectionPropertyMask > ## CONSUMES > gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask > ## CONSUMES > gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard ## > CONSUMES > + gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable ## > SOMETIMES_CONSUMES > > [Pcd.IA32,Pcd.X64,Pcd.ARM,Pcd.AARCH64] > gEfiMdeModulePkgTokenSpaceGuid.PcdSetNxForStack ## > SOMETIMES_CONSUMES > diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > index edc38e4525..a5bcdcc734 100644 > --- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > +++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c > @@ -15,7 +15,7 @@ > 2) IA-32 Intel(R) Architecture Software Developer's Manual Volume > 2:Instruction Set Reference, Intel > 3) IA-32 Intel(R) Architecture Software Developer's Manual Volume > 3:System Programmer's Guide, Intel > > -Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR> > +Copyright (c) 2006 - 2019, Intel Corporation. All rights reserved.<BR> > Copyright (c) 2017, AMD Incorporated. 
All rights reserved.<BR> > > SPDX-License-Identifier: BSD-2-Clause-Patent @@ -626,14 +626,19 @@ > CreateIdentityMappingPageTables ( > ) > { > UINT32 RegEax; > + UINT32 RegEbx; > + UINT32 RegEcx; > UINT32 RegEdx; > UINT8 PhysicalAddressBits; > EFI_PHYSICAL_ADDRESS PageAddress; > + UINTN IndexOfPml5Entries; > UINTN IndexOfPml4Entries; > UINTN IndexOfPdpEntries; > UINTN IndexOfPageDirectoryEntries; > + UINT32 NumberOfPml5EntriesNeeded; > UINT32 NumberOfPml4EntriesNeeded; > UINT32 NumberOfPdpEntriesNeeded; > + PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel5Entry; > PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel4Entry; > PAGE_MAP_AND_DIRECTORY_POINTER *PageMap; > PAGE_MAP_AND_DIRECTORY_POINTER > *PageDirectoryPointerEntry; > @@ -641,9 +646,11 @@ CreateIdentityMappingPageTables ( > UINTN TotalPagesNum; > UINTN BigPageAddress; > VOID *Hob; > + BOOLEAN Page5LevelSupport; > BOOLEAN Page1GSupport; > PAGE_TABLE_1G_ENTRY *PageDirectory1GEntry; > UINT64 AddressEncMask; > + IA32_CR4 Cr4; > > // > // Make sure AddressEncMask is contained to smallest supported address > field @@ -677,33 +684,66 @@ CreateIdentityMappingPageTables ( > } > } > > + Page5LevelSupport = FALSE; > + if (PcdGetBool (PcdUse5LevelPageTable)) { > + AsmCpuidEx (0x7, 0, &RegEax, &RegEbx, &RegEcx, &RegEdx); > + DEBUG ((DEBUG_INFO, "Cpuid(7/0): %08x/%08x/%08x/%08x\n", RegEax, > RegEbx, RegEcx, RegEdx)); > + if ((RegEcx & BIT16) != 0) { > + Page5LevelSupport = TRUE; > + } > + } > + > + DEBUG ((DEBUG_INFO, "AddressBits/5LevelPaging/1GPage > = %d/%d/%d\n", > + PhysicalAddressBits, Page5LevelSupport, Page1GSupport)); > + > // > - // IA-32e paging translates 48-bit linear addresses to 52-bit physical > addresses. > + // IA-32e paging translates 48-bit linear addresses to 52-bit > + physical addresses // when 5-Level Paging is disabled, // due to > + either unsupported by HW, or disabled by PCD. > // > ASSERT (PhysicalAddressBits <= 52); > - if (PhysicalAddressBits > 48) { > + if (!Page5LevelSupport && PhysicalAddressBits > 48) { > PhysicalAddressBits = 48; > } > > // > // Calculate the table entries needed. > // > - if (PhysicalAddressBits <= 39 ) { > - NumberOfPml4EntriesNeeded = 1; > - NumberOfPdpEntriesNeeded = (UINT32)LShiftU64 (1, > (PhysicalAddressBits - 30)); > - } else { > - NumberOfPml4EntriesNeeded = (UINT32)LShiftU64 (1, > (PhysicalAddressBits - 39)); > - NumberOfPdpEntriesNeeded = 512; > + NumberOfPml5EntriesNeeded = 1; > + if (PhysicalAddressBits > 48) { > + NumberOfPml5EntriesNeeded = (UINT32) LShiftU64 (1, > PhysicalAddressBits - 48); > + PhysicalAddressBits = 48; > } > > + NumberOfPml4EntriesNeeded = 1; > + if (PhysicalAddressBits > 39) { > + NumberOfPml4EntriesNeeded = (UINT32) LShiftU64 (1, > PhysicalAddressBits - 39); > + PhysicalAddressBits = 39; > + } > + > + NumberOfPdpEntriesNeeded = 1; > + ASSERT (PhysicalAddressBits > 30); > + NumberOfPdpEntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits > + - 30); > + > // > // Pre-allocate big pages to avoid later allocations. > // > if (!Page1GSupport) { > - TotalPagesNum = (NumberOfPdpEntriesNeeded + 1) * > NumberOfPml4EntriesNeeded + 1; > + TotalPagesNum = ((NumberOfPdpEntriesNeeded + 1) * > + NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1; > } else { > - TotalPagesNum = NumberOfPml4EntriesNeeded + 1; > + TotalPagesNum = (NumberOfPml4EntriesNeeded + 1) * > + NumberOfPml5EntriesNeeded + 1; } > + > + // > + // Substract the one page occupied by PML5 entries if 5-Level Paging is > disabled. 
> + // > + if (!Page5LevelSupport) { > + TotalPagesNum--; > } > + > + DEBUG ((DEBUG_INFO, "Pml5/Pml4/Pdp/TotalPage = %d/%d/%d/%d\n", > + NumberOfPml5EntriesNeeded, NumberOfPml4EntriesNeeded, > + NumberOfPdpEntriesNeeded, TotalPagesNum)); > + > BigPageAddress = (UINTN) AllocatePageTableMemory (TotalPagesNum); > ASSERT (BigPageAddress != 0); > > @@ -711,92 +751,125 @@ CreateIdentityMappingPageTables ( > // By architecture only one PageMapLevel4 exists - so lets allocate storage > for it. > // > PageMap = (VOID *) BigPageAddress; > - BigPageAddress += SIZE_4KB; > - > - PageMapLevel4Entry = PageMap; > - PageAddress = 0; > - for (IndexOfPml4Entries = 0; IndexOfPml4Entries < > NumberOfPml4EntriesNeeded; IndexOfPml4Entries++, > PageMapLevel4Entry++) { > + if (Page5LevelSupport) { > // > - // Each PML4 entry points to a page of Page Directory Pointer entires. > - // So lets allocate space for them and fill them in in the IndexOfPdpEntries > loop. > + // By architecture only one PageMapLevel5 exists - so lets allocate storage > for it. > // > - PageDirectoryPointerEntry = (VOID *) BigPageAddress; > - BigPageAddress += SIZE_4KB; > + PageMapLevel5Entry = PageMap; > + BigPageAddress += SIZE_4KB; > + } > + PageAddress = 0; > > + for ( IndexOfPml5Entries = 0 > + ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded > + ; IndexOfPml5Entries++, PageMapLevel5Entry++) { > // > - // Make a PML4 Entry > + // Each PML5 entry points to a page of PML4 entires. > + // So lets allocate space for them and fill them in in the > IndexOfPml4Entries loop. > + // When 5-Level Paging is disabled, below allocation happens only once. > // > - PageMapLevel4Entry->Uint64 = > (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask; > - PageMapLevel4Entry->Bits.ReadWrite = 1; > - PageMapLevel4Entry->Bits.Present = 1; > + PageMapLevel4Entry = (VOID *) BigPageAddress; > + BigPageAddress += SIZE_4KB; > > - if (Page1GSupport) { > - PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry; > + if (Page5LevelSupport) { > + // > + // Make a PML5 Entry > + // > + PageMapLevel5Entry->Uint64 = (UINT64) (UINTN) PageMapLevel4Entry; > + PageMapLevel5Entry->Bits.ReadWrite = 1; > + PageMapLevel5Entry->Bits.Present = 1; > + } > > - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; > IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += > SIZE_1GB) { > - if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) { > - Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, > StackBase, StackSize); > - } else { > - // > - // Fill in the Page Directory entries > - // > - PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | > AddressEncMask; > - PageDirectory1GEntry->Bits.ReadWrite = 1; > - PageDirectory1GEntry->Bits.Present = 1; > - PageDirectory1GEntry->Bits.MustBe1 = 1; > - } > - } > - } else { > - for (IndexOfPdpEntries = 0; IndexOfPdpEntries < > NumberOfPdpEntriesNeeded; IndexOfPdpEntries++, > PageDirectoryPointerEntry++) { > - // > - // Each Directory Pointer entries points to a page of Page Directory > entires. > - // So allocate space for them and fill them in in the > IndexOfPageDirectoryEntries loop. > - // > - PageDirectoryEntry = (VOID *) BigPageAddress; > - BigPageAddress += SIZE_4KB; > + for ( IndexOfPml4Entries = 0 > + ; IndexOfPml4Entries < (NumberOfPml5EntriesNeeded == 1 ? > NumberOfPml4EntriesNeeded : 512) > + ; IndexOfPml4Entries++, PageMapLevel4Entry++) { > + // > + // Each PML4 entry points to a page of Page Directory Pointer entires. 
> + // So lets allocate space for them and fill them in in the > IndexOfPdpEntries loop. > + // > + PageDirectoryPointerEntry = (VOID *) BigPageAddress; > + BigPageAddress += SIZE_4KB; > > - // > - // Fill in a Page Directory Pointer Entries > - // > - PageDirectoryPointerEntry->Uint64 = > (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask; > - PageDirectoryPointerEntry->Bits.ReadWrite = 1; > - PageDirectoryPointerEntry->Bits.Present = 1; > + // > + // Make a PML4 Entry > + // > + PageMapLevel4Entry->Uint64 = > (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask; > + PageMapLevel4Entry->Bits.ReadWrite = 1; > + PageMapLevel4Entry->Bits.Present = 1; > > - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < > 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += > SIZE_2MB) { > - if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) { > - // > - // Need to split this 2M page that covers NULL or stack range. > - // > - Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, > StackBase, StackSize); > + if (Page1GSupport) { > + PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry; > + > + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < > 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress > += SIZE_1GB) { > + if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) { > + Split1GPageTo2M (PageAddress, (UINT64 *) > + PageDirectory1GEntry, StackBase, StackSize); > } else { > // > // Fill in the Page Directory entries > // > - PageDirectoryEntry->Uint64 = (UINT64)PageAddress | > AddressEncMask; > - PageDirectoryEntry->Bits.ReadWrite = 1; > - PageDirectoryEntry->Bits.Present = 1; > - PageDirectoryEntry->Bits.MustBe1 = 1; > + PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | > AddressEncMask; > + PageDirectory1GEntry->Bits.ReadWrite = 1; > + PageDirectory1GEntry->Bits.Present = 1; > + PageDirectory1GEntry->Bits.MustBe1 = 1; > } > } > - } > + } else { > + for ( IndexOfPdpEntries = 0 > + ; IndexOfPdpEntries < (NumberOfPml4EntriesNeeded == 1 ? > NumberOfPdpEntriesNeeded : 512) > + ; IndexOfPdpEntries++, PageDirectoryPointerEntry++) { > + // > + // Each Directory Pointer entries points to a page of Page Directory > entires. > + // So allocate space for them and fill them in in the > IndexOfPageDirectoryEntries loop. > + // > + PageDirectoryEntry = (VOID *) BigPageAddress; > + BigPageAddress += SIZE_4KB; > > - for (; IndexOfPdpEntries < 512; IndexOfPdpEntries++, > PageDirectoryPointerEntry++) { > - ZeroMem ( > - PageDirectoryPointerEntry, > - sizeof(PAGE_MAP_AND_DIRECTORY_POINTER) > - ); > + // > + // Fill in a Page Directory Pointer Entries > + // > + PageDirectoryPointerEntry->Uint64 = > (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask; > + PageDirectoryPointerEntry->Bits.ReadWrite = 1; > + PageDirectoryPointerEntry->Bits.Present = 1; > + > + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < > 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += > SIZE_2MB) { > + if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) { > + // > + // Need to split this 2M page that covers NULL or stack range. 
> + // > + Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, > StackBase, StackSize); > + } else { > + // > + // Fill in the Page Directory entries > + // > + PageDirectoryEntry->Uint64 = (UINT64)PageAddress | > AddressEncMask; > + PageDirectoryEntry->Bits.ReadWrite = 1; > + PageDirectoryEntry->Bits.Present = 1; > + PageDirectoryEntry->Bits.MustBe1 = 1; > + } > + } > + } > + > + // > + // Fill with null entry for unused PDPTE > + // > + ZeroMem (PageDirectoryPointerEntry, (512 - IndexOfPdpEntries) * > + sizeof(PAGE_MAP_AND_DIRECTORY_POINTER)); > } > } > + > + // > + // For the PML4 entries we are not using fill in a null entry. > + // > + ZeroMem (PageMapLevel4Entry, (512 - IndexOfPml4Entries) * sizeof > + (PAGE_MAP_AND_DIRECTORY_POINTER)); > } > > - // > - // For the PML4 entries we are not using fill in a null entry. > - // > - for (; IndexOfPml4Entries < 512; IndexOfPml4Entries++, > PageMapLevel4Entry++) { > - ZeroMem ( > - PageMapLevel4Entry, > - sizeof (PAGE_MAP_AND_DIRECTORY_POINTER) > - ); > + if (Page5LevelSupport) { > + Cr4.UintN = AsmReadCr4 (); > + Cr4.Bits.LA57 = 1; > + AsmWriteCr4 (Cr4.UintN); > + // > + // For the PML5 entries we are not using fill in a null entry. > + // > + ZeroMem (PageMapLevel5Entry, (512 - IndexOfPml5Entries) * sizeof > + (PAGE_MAP_AND_DIRECTORY_POINTER)); > } > > // > -- > 2.21.0.windows.1 > > > ^ permalink raw reply [flat|nested] 23+ messages in thread
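To make the pre-allocation arithmetic in the quoted hunk concrete, here is a rough worked example of the TotalPagesNum values it computes. These counts cover only the pre-allocated PML5/PML4/PDPT/PD hierarchy; pages that Split1GPageTo2M()/Split2MPageTo4K() allocate separately for the stack-guard and NULL-detection splits are not included.

  48-bit physical addressing, 4-level paging, 1 GiB pages not supported:
    Pml5 = 1, Pml4 = 2^(48-39) = 512, Pdp = 512
    TotalPagesNum = ((512 + 1) * 512 + 1) * 1 + 1 - 1 = 262657 pages  (~1 GiB)

  48-bit physical addressing, 4-level paging, 1 GiB pages supported:
    TotalPagesNum = (512 + 1) * 1 + 1 - 1 = 513 pages  (~2 MiB)

  52-bit physical addressing, 5-level paging, 1 GiB pages supported:
    Pml5 = 2^(52-48) = 16, Pml4 = 512
    TotalPagesNum = (512 + 1) * 16 + 1 = 8209 pages  (~32 MiB)

The first case illustrates why the sizing question raised for OvmfPkg/PlatformPei earlier in the thread matters: without 1 GiB page support, the identity-map tables alone can approach a gigabyte once the full 48-bit physical address space has to be covered.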
Thread overview: 23+ messages (newest: 2019-07-25 17:19 UTC)

2019-07-22  8:15 [PATCH 0/4] Support 5-level paging in DXE long mode Ni, Ray
2019-07-22  8:15 ` [PATCH 1/4] UefiCpuPkg/MpInitLib: Enable 5-level paging for AP when BSP's enabled Ni, Ray
2019-07-23  9:15   ` [edk2-devel] " Laszlo Ersek
2019-07-24  7:02     ` Ni, Ray
2019-07-22  8:15 ` [PATCH 2/4] UefiCpuPkg/CpuDxe: Support parsing 5-level page table Ni, Ray
2019-07-23  9:22   ` [edk2-devel] " Laszlo Ersek
2019-07-24  7:03     ` Ni, Ray
2019-07-25 17:19       ` Laszlo Ersek
2019-07-22  8:15 ` [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD PcdUse5LevelPageTable Ni, Ray
2019-07-23  2:05   ` [edk2-devel] " Wu, Hao A
2019-07-23 14:20     ` Ni, Ray
2019-07-24  2:02   ` Dong, Eric
2019-07-22  8:15 ` [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode Ni, Ray
2019-07-23  2:05   ` [edk2-devel] " Wu, Hao A
2019-07-23  7:48     ` Laszlo Ersek
2019-07-23 19:45       ` Singh, Brijesh
2019-07-23  9:46   ` Laszlo Ersek
2019-07-23 15:29     ` Ni, Ray
2019-07-23 19:20       ` Laszlo Ersek
2019-07-23 23:54         ` Michael D Kinney
2019-07-24  1:40           ` Ni, Ray
     [not found] ` <15B3ACB4E8DDF416.7925@groups.io>
2019-07-22  8:28   ` [edk2-devel] [PATCH 3/4] MdeModulePkg/DxeIpl: Introduce PCD PcdUse5LevelPageTable Ni, Ray
     [not found] ` <15B3ACB536E52165.29669@groups.io>
2019-07-22  8:28   ` [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode Ni, Ray