From: "Ni, Ray" <ray.ni@intel.com>
To: "devel@edk2.groups.io" <devel@edk2.groups.io>,
"lersek@redhat.com" <lersek@redhat.com>
Cc: "Dong, Eric" <eric.dong@intel.com>
Subject: Re: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode
Date: Tue, 23 Jul 2019 15:29:40 +0000 [thread overview]
Message-ID: <734D49CCEBEEF84792F5B80ED585239D5C2417DB@SHSMSX104.ccr.corp.intel.com> (raw)
In-Reply-To: <e3bc4b34-bd7a-d7ca-6935-aea1abbdad7b@redhat.com>
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Laszlo Ersek
> Sent: Tuesday, July 23, 2019 5:46 PM
> To: devel@edk2.groups.io; Ni, Ray <ray.ni@intel.com>
> Cc: Dong, Eric <eric.dong@intel.com>
> Subject: Re: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode
>
> On 07/22/19 10:15, Ni, Ray wrote:
> > REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008
> >
> > DxeIpl is responsible to create page table for DXE phase running
> > either in long mode or in 32bit mode with certain protection
> > mechanism enabled (refer to ToBuildPageTable()).
> >
> > The patch updates DxeIpl to create 5-level page table for DXE phase
> > running in long mode when PcdUse5LevelPageTable is TRUE and CPU
> > supports 5-level page table.
> >
> > Signed-off-by: Ray Ni <ray.ni@intel.com>
> > Cc: Eric Dong <eric.dong@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > ---
> > MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf | 1 +
> > .../Core/DxeIplPeim/X64/VirtualMemory.c | 227 ++++++++++++------
> > 2 files changed, 151 insertions(+), 77 deletions(-)
> >
> > diff --git a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf
> > index abc3217b01..98bc17fc9d 100644
> > --- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf
> > +++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf
> > @@ -110,6 +110,7 @@ [Pcd.IA32,Pcd.X64]
> > gEfiMdeModulePkgTokenSpaceGuid.PcdNullPointerDetectionPropertyMask ## CONSUMES
> > gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask ## CONSUMES
> > gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard ## CONSUMES
> > + gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable ## SOMETIMES_CONSUMES
> >
> > [Pcd.IA32,Pcd.X64,Pcd.ARM,Pcd.AARCH64]
> > gEfiMdeModulePkgTokenSpaceGuid.PcdSetNxForStack ## SOMETIMES_CONSUMES
> > diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
> > index edc38e4525..a5bcdcc734 100644
> > --- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
> > +++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
> > @@ -15,7 +15,7 @@
> > 2) IA-32 Intel(R) Architecture Software Developer's Manual Volume 2:Instruction Set Reference, Intel
> > 3) IA-32 Intel(R) Architecture Software Developer's Manual Volume 3:System Programmer's Guide, Intel
> >
> > -Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
> > +Copyright (c) 2006 - 2019, Intel Corporation. All rights reserved.<BR>
> > Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> >
> > SPDX-License-Identifier: BSD-2-Clause-Patent
> > @@ -626,14 +626,19 @@ CreateIdentityMappingPageTables (
> > )
> > {
> > UINT32 RegEax;
> > + UINT32 RegEbx;
> > + UINT32 RegEcx;
> > UINT32 RegEdx;
> > UINT8 PhysicalAddressBits;
> > EFI_PHYSICAL_ADDRESS PageAddress;
> > + UINTN IndexOfPml5Entries;
> > UINTN IndexOfPml4Entries;
> > UINTN IndexOfPdpEntries;
> > UINTN IndexOfPageDirectoryEntries;
> > + UINT32 NumberOfPml5EntriesNeeded;
> > UINT32 NumberOfPml4EntriesNeeded;
> > UINT32 NumberOfPdpEntriesNeeded;
> > + PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel5Entry;
> > PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel4Entry;
> > PAGE_MAP_AND_DIRECTORY_POINTER *PageMap;
> > PAGE_MAP_AND_DIRECTORY_POINTER *PageDirectoryPointerEntry;
> > @@ -641,9 +646,11 @@ CreateIdentityMappingPageTables (
> > UINTN TotalPagesNum;
> > UINTN BigPageAddress;
> > VOID *Hob;
> > + BOOLEAN Page5LevelSupport;
> > BOOLEAN Page1GSupport;
> > PAGE_TABLE_1G_ENTRY *PageDirectory1GEntry;
> > UINT64 AddressEncMask;
> > + IA32_CR4 Cr4;
> >
> > //
> > // Make sure AddressEncMask is contained to smallest supported address field
> > @@ -677,33 +684,66 @@ CreateIdentityMappingPageTables (
> > }
> > }
> >
> > + Page5LevelSupport = FALSE;
> > + if (PcdGetBool (PcdUse5LevelPageTable)) {
> > + AsmCpuidEx (0x7, 0, &RegEax, &RegEbx, &RegEcx, &RegEdx);
> > + DEBUG ((DEBUG_INFO, "Cpuid(7/0): %08x/%08x/%08x/%08x\n", RegEax, RegEbx, RegEcx, RegEdx));
> > + if ((RegEcx & BIT16) != 0) {
>
> (1) Would it be possible to use macro names here, for Index and SubIndex
> in AsmCpuidEx(), and a "better" macro name than BIT16, in the RegEcx check?
There are macros defined in UefiCpuPkg for the CPUID leaf functions, and
structures for the CPUID output data.
But since there is a rule that MdeModulePkg cannot depend on UefiCpuPkg,
the code here cannot use those macros/structures.
I am thinking of a new rule that allows UefiCpuPkg and MdeModulePkg to depend
on each other. I haven't talked with Mike about this yet, so I am not sure
whether he is OK with it.
For this case, yes, I can temporarily define two macros: one for 0x7, the
other for BIT16.
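Roughly like below (just a sketch; the macro names are placeholders I am
making up here, not necessarily the ones UefiCpuPkg uses):

  //
  // Local definitions so that DxeIpl does not depend on UefiCpuPkg.
  // CPUID leaf 07h, sub-leaf 0, ECX bit 16 reports 5-level paging support.
  //
  #define CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS           0x07
  #define CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_ECX_LA57  BIT16

  AsmCpuidEx (CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS, 0, &RegEax, &RegEbx, &RegEcx, &RegEdx);
  if ((RegEcx & CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_ECX_LA57) != 0) {
    Page5LevelSupport = TRUE;
  }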
>
> > + Page5LevelSupport = TRUE;
> > + }
> > + }
> > +
> > + DEBUG ((DEBUG_INFO, "AddressBits/5LevelPaging/1GPage = %d/%d/%d\n", PhysicalAddressBits, Page5LevelSupport,
> Page1GSupport));
> > +
>
> (2) Can we format this log message as:
>
> AddressBits=%d 5LevelPaging=%d 1GPage=%d
ok.
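I.e., it would become roughly:

  DEBUG ((DEBUG_INFO, "AddressBits=%d 5LevelPaging=%d 1GPage=%d\n",
    PhysicalAddressBits, Page5LevelSupport, Page1GSupport));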
>
> ?
>
> > //
> > - // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses.
> > + // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses
> > + // when 5-Level Paging is disabled,
> > + // due to either unsupported by HW, or disabled by PCD.
> > //
> > ASSERT (PhysicalAddressBits <= 52);
> > - if (PhysicalAddressBits > 48) {
> > + if (!Page5LevelSupport && PhysicalAddressBits > 48) {
> > PhysicalAddressBits = 48;
> > }
> >
> > //
> > // Calculate the table entries needed.
> > //
> > - if (PhysicalAddressBits <= 39 ) {
> > - NumberOfPml4EntriesNeeded = 1;
> > - NumberOfPdpEntriesNeeded = (UINT32)LShiftU64 (1, (PhysicalAddressBits - 30));
> > - } else {
> > - NumberOfPml4EntriesNeeded = (UINT32)LShiftU64 (1, (PhysicalAddressBits - 39));
> > - NumberOfPdpEntriesNeeded = 512;
> > + NumberOfPml5EntriesNeeded = 1;
> > + if (PhysicalAddressBits > 48) {
> > + NumberOfPml5EntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 48);
> > + PhysicalAddressBits = 48;
> > }
> >
> > + NumberOfPml4EntriesNeeded = 1;
> > + if (PhysicalAddressBits > 39) {
> > + NumberOfPml4EntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 39);
> > + PhysicalAddressBits = 39;
> > + }
> > +
> > + NumberOfPdpEntriesNeeded = 1;
> > + ASSERT (PhysicalAddressBits > 30);
> > + NumberOfPdpEntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 30);
> > +
> > //
> > // Pre-allocate big pages to avoid later allocations.
> > //
> > if (!Page1GSupport) {
> > - TotalPagesNum = (NumberOfPdpEntriesNeeded + 1) * NumberOfPml4EntriesNeeded + 1;
> > + TotalPagesNum = ((NumberOfPdpEntriesNeeded + 1) * NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1;
> > } else {
> > - TotalPagesNum = NumberOfPml4EntriesNeeded + 1;
> > + TotalPagesNum = (NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1;
> > + }
> > +
> > + //
> > + // Substract the one page occupied by PML5 entries if 5-Level Paging is disabled.
> > + //
> > + if (!Page5LevelSupport) {
> > + TotalPagesNum--;
> > }
> > +
> > + DEBUG ((DEBUG_INFO, "Pml5/Pml4/Pdp/TotalPage = %d/%d/%d/%d\n",
> > + NumberOfPml5EntriesNeeded, NumberOfPml4EntriesNeeded,
> > + NumberOfPdpEntriesNeeded, TotalPagesNum));
> > +
>
> (3) Same comment about log message formatting as in (2).
>
> (4) Please log UINT32 values with %u, not %d.
>
> (5) Please log UINTN values with %Lu (and cast them to UINT64 first).
OK to the above (3), (4), (5).
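For example (just sketching the intent; the exact wording can be adjusted):

  DEBUG ((DEBUG_INFO, "Pml5Entries=%u Pml4Entries=%u PdpEntries=%u TotalPagesNum=%Lu\n",
    NumberOfPml5EntriesNeeded, NumberOfPml4EntriesNeeded,
    NumberOfPdpEntriesNeeded, (UINT64) TotalPagesNum));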
>
> (6) More generally, can we please replace this patch with three patches,
> in the series?
>
> - the first patch should extract the TotalPagesNum calculation to a
> separate *library* function (it should be a public function in a BASE or
> PEIM library)
>
> - the second patch should extend the TotalPagesNum calculation in the
> library function to 5-level paging
>
> - the third patch should be the remaining code from the present patch.
>
> Here's why: in OvmfPkg/PlatformPei, the GetPeiMemoryCap() function
> duplicates this calculation. In OVMF, PEI-phase memory allocations can
> be dominated by the page tables built for 64-bit DXE, and so
> OvmfPkg/PlatformPei must know, for sizing the permanent PEI RAM, how
> much RAM the DXE IPL PEIM will need for those page tables.
>
> I would *really* dislike copying this update (for 5-level paging) from
> CreateIdentityMappingPageTables() to GetPeiMemoryCap(). Instead, the
> calculation should be extracted to a library function, and then OVMF
> (and all other platforms affected similarly) could call the new function.
>
> If this is not feasible or very much out of scope, then I guess we'll
> just keep the PCD as FALSE in OVMF, for the time being.
>
BZ https://bugzilla.tianocore.org/show_bug.cgi?id=847 may be what
you need.
Extracting a library API that returns how many pages are needed for the page
tables needs to consider several cases:
1. whether 4K, 2M or 1G page size is used
2. the range of physical memory to map
3. the page table level (4-level or 5-level)
But right now this is not a high priority on my to-do list.
I can help change GetPeiMemoryCap() for the short-term need.
Are you OK with this?
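If/when such a library API is added, I imagine it could look roughly like
below (the function and parameter names are only my sketch, not a final
proposal; the body just mirrors the TotalPagesNum math in this patch and
uses LShiftU64 from BaseLib):

  /**
    Return the number of 4KB pages needed for the identity-mapping page
    tables that cover 2^PhysicalAddressBits of address space.
  **/
  UINTN
  EFIAPI
  GetIdentityMappingPageTablePageCount (
    IN UINT8    PhysicalAddressBits,
    IN BOOLEAN  Use1GPage,
    IN BOOLEAN  Use5LevelPaging
    )
  {
    UINT32  Pml5Entries;
    UINT32  Pml4Entries;
    UINT32  PdpEntries;
    UINTN   TotalPages;

    //
    // Without 5-level paging the linear address space is limited to 48 bits.
    //
    if (!Use5LevelPaging && (PhysicalAddressBits > 48)) {
      PhysicalAddressBits = 48;
    }

    Pml5Entries = 1;
    if (PhysicalAddressBits > 48) {
      Pml5Entries = (UINT32) LShiftU64 (1, PhysicalAddressBits - 48);
      PhysicalAddressBits = 48;
    }

    Pml4Entries = 1;
    if (PhysicalAddressBits > 39) {
      Pml4Entries = (UINT32) LShiftU64 (1, PhysicalAddressBits - 39);
      PhysicalAddressBits = 39;
    }

    PdpEntries = (UINT32) LShiftU64 (1, PhysicalAddressBits - 30);

    if (Use1GPage) {
      TotalPages = (Pml4Entries + 1) * Pml5Entries + 1;
    } else {
      TotalPages = ((PdpEntries + 1) * Pml4Entries + 1) * Pml5Entries + 1;
    }

    //
    // Drop the page reserved for PML5 entries when 5-level paging is off.
    //
    if (!Use5LevelPaging) {
      TotalPages--;
    }

    return TotalPages;
  }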
> I'll let others review the rest of this patch.
>
> Thanks
> Laszlo
>
> > BigPageAddress = (UINTN) AllocatePageTableMemory (TotalPagesNum);
> > ASSERT (BigPageAddress != 0);
> >
> > @@ -711,92 +751,125 @@ CreateIdentityMappingPageTables (
> > // By architecture only one PageMapLevel4 exists - so lets allocate storage for it.
> > //
> > PageMap = (VOID *) BigPageAddress;
> > - BigPageAddress += SIZE_4KB;
> > -
> > - PageMapLevel4Entry = PageMap;
> > - PageAddress = 0;
> > - for (IndexOfPml4Entries = 0; IndexOfPml4Entries < NumberOfPml4EntriesNeeded; IndexOfPml4Entries++, PageMapLevel4Entry++) {
> > + if (Page5LevelSupport) {
> > //
> > - // Each PML4 entry points to a page of Page Directory Pointer entires.
> > - // So lets allocate space for them and fill them in in the IndexOfPdpEntries loop.
> > + // By architecture only one PageMapLevel5 exists - so lets allocate storage for it.
> > //
> > - PageDirectoryPointerEntry = (VOID *) BigPageAddress;
> > - BigPageAddress += SIZE_4KB;
> > + PageMapLevel5Entry = PageMap;
> > + BigPageAddress += SIZE_4KB;
> > + }
> > + PageAddress = 0;
> >
> > + for ( IndexOfPml5Entries = 0
> > + ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded
> > + ; IndexOfPml5Entries++, PageMapLevel5Entry++) {
> > //
> > - // Make a PML4 Entry
> > + // Each PML5 entry points to a page of PML4 entires.
> > + // So lets allocate space for them and fill them in in the IndexOfPml4Entries loop.
> > + // When 5-Level Paging is disabled, below allocation happens only once.
> > //
> > - PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask;
> > - PageMapLevel4Entry->Bits.ReadWrite = 1;
> > - PageMapLevel4Entry->Bits.Present = 1;
> > + PageMapLevel4Entry = (VOID *) BigPageAddress;
> > + BigPageAddress += SIZE_4KB;
> >
> > - if (Page1GSupport) {
> > - PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry;
> > + if (Page5LevelSupport) {
> > + //
> > + // Make a PML5 Entry
> > + //
> > + PageMapLevel5Entry->Uint64 = (UINT64) (UINTN) PageMapLevel4Entry;
> > + PageMapLevel5Entry->Bits.ReadWrite = 1;
> > + PageMapLevel5Entry->Bits.Present = 1;
> > + }
> >
> > - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += SIZE_1GB) {
> > - if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) {
> > - Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, StackBase, StackSize);
> > - } else {
> > - //
> > - // Fill in the Page Directory entries
> > - //
> > - PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
> > - PageDirectory1GEntry->Bits.ReadWrite = 1;
> > - PageDirectory1GEntry->Bits.Present = 1;
> > - PageDirectory1GEntry->Bits.MustBe1 = 1;
> > - }
> > - }
> > - } else {
> > - for (IndexOfPdpEntries = 0; IndexOfPdpEntries < NumberOfPdpEntriesNeeded; IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
> > - //
> > - // Each Directory Pointer entries points to a page of Page Directory entires.
> > - // So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop.
> > - //
> > - PageDirectoryEntry = (VOID *) BigPageAddress;
> > - BigPageAddress += SIZE_4KB;
> > + for ( IndexOfPml4Entries = 0
> > + ; IndexOfPml4Entries < (NumberOfPml5EntriesNeeded == 1 ? NumberOfPml4EntriesNeeded : 512)
> > + ; IndexOfPml4Entries++, PageMapLevel4Entry++) {
> > + //
> > + // Each PML4 entry points to a page of Page Directory Pointer entires.
> > + // So lets allocate space for them and fill them in in the IndexOfPdpEntries loop.
> > + //
> > + PageDirectoryPointerEntry = (VOID *) BigPageAddress;
> > + BigPageAddress += SIZE_4KB;
> >
> > - //
> > - // Fill in a Page Directory Pointer Entries
> > - //
> > - PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask;
> > - PageDirectoryPointerEntry->Bits.ReadWrite = 1;
> > - PageDirectoryPointerEntry->Bits.Present = 1;
> > + //
> > + // Make a PML4 Entry
> > + //
> > + PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask;
> > + PageMapLevel4Entry->Bits.ReadWrite = 1;
> > + PageMapLevel4Entry->Bits.Present = 1;
> >
> > - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) {
> > - if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) {
> > - //
> > - // Need to split this 2M page that covers NULL or stack range.
> > - //
> > - Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, StackBase, StackSize);
> > + if (Page1GSupport) {
> > + PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry;
> > +
> > + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += SIZE_1GB) {
> > + if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) {
> > + Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, StackBase, StackSize);
> > } else {
> > //
> > // Fill in the Page Directory entries
> > //
> > - PageDirectoryEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
> > - PageDirectoryEntry->Bits.ReadWrite = 1;
> > - PageDirectoryEntry->Bits.Present = 1;
> > - PageDirectoryEntry->Bits.MustBe1 = 1;
> > + PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
> > + PageDirectory1GEntry->Bits.ReadWrite = 1;
> > + PageDirectory1GEntry->Bits.Present = 1;
> > + PageDirectory1GEntry->Bits.MustBe1 = 1;
> > }
> > }
> > - }
> > + } else {
> > + for ( IndexOfPdpEntries = 0
> > + ; IndexOfPdpEntries < (NumberOfPml4EntriesNeeded == 1 ? NumberOfPdpEntriesNeeded : 512)
> > + ; IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
> > + //
> > + // Each Directory Pointer entries points to a page of Page Directory entires.
> > + // So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop.
> > + //
> > + PageDirectoryEntry = (VOID *) BigPageAddress;
> > + BigPageAddress += SIZE_4KB;
> >
> > - for (; IndexOfPdpEntries < 512; IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
> > - ZeroMem (
> > - PageDirectoryPointerEntry,
> > - sizeof(PAGE_MAP_AND_DIRECTORY_POINTER)
> > - );
> > + //
> > + // Fill in a Page Directory Pointer Entries
> > + //
> > + PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask;
> > + PageDirectoryPointerEntry->Bits.ReadWrite = 1;
> > + PageDirectoryPointerEntry->Bits.Present = 1;
> > +
> > + for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) {
> > + if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) {
> > + //
> > + // Need to split this 2M page that covers NULL or stack range.
> > + //
> > + Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, StackBase, StackSize);
> > + } else {
> > + //
> > + // Fill in the Page Directory entries
> > + //
> > + PageDirectoryEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
> > + PageDirectoryEntry->Bits.ReadWrite = 1;
> > + PageDirectoryEntry->Bits.Present = 1;
> > + PageDirectoryEntry->Bits.MustBe1 = 1;
> > + }
> > + }
> > + }
> > +
> > + //
> > + // Fill with null entry for unused PDPTE
> > + //
> > + ZeroMem (PageDirectoryPointerEntry, (512 - IndexOfPdpEntries) * sizeof(PAGE_MAP_AND_DIRECTORY_POINTER));
> > }
> > }
> > +
> > + //
> > + // For the PML4 entries we are not using fill in a null entry.
> > + //
> > + ZeroMem (PageMapLevel4Entry, (512 - IndexOfPml4Entries) * sizeof (PAGE_MAP_AND_DIRECTORY_POINTER));
> > }
> >
> > - //
> > - // For the PML4 entries we are not using fill in a null entry.
> > - //
> > - for (; IndexOfPml4Entries < 512; IndexOfPml4Entries++, PageMapLevel4Entry++) {
> > - ZeroMem (
> > - PageMapLevel4Entry,
> > - sizeof (PAGE_MAP_AND_DIRECTORY_POINTER)
> > - );
> > + if (Page5LevelSupport) {
> > + Cr4.UintN = AsmReadCr4 ();
> > + Cr4.Bits.LA57 = 1;
> > + AsmWriteCr4 (Cr4.UintN);
> > + //
> > + // For the PML5 entries we are not using fill in a null entry.
> > + //
> > + ZeroMem (PageMapLevel5Entry, (512 - IndexOfPml5Entries) * sizeof (PAGE_MAP_AND_DIRECTORY_POINTER));
> > }
> >
> > //
> >
>
>
>