From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [edk2-devel] [PATCH 4/4] MdeModulePkg/DxeIpl: Create 5-level page table for long mode
To: devel@edk2.groups.io, ray.ni@intel.com
Cc: Eric Dong
References: <20190722081547.56100-1-ray.ni@intel.com> <20190722081547.56100-5-ray.ni@intel.com>
From: "Laszlo Ersek"
Date: Tue, 23 Jul 2019 11:46:12 +0200
In-Reply-To: <20190722081547.56100-5-ray.ni@intel.com>

On 07/22/19 10:15, Ni, Ray wrote:
> REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2008
>
> DxeIpl is responsible to create page table for DXE phase running
> either in long mode or in 32bit mode with certain protection
> mechanism enabled (refer to ToBuildPageTable()).
>
> The patch updates DxeIpl to create 5-level page table for DXE phase
> running in long mode when PcdUse5LevelPageTable is TRUE and CPU
> supports 5-level page table.
>
> Signed-off-by: Ray Ni
> Cc: Eric Dong
> Cc: Laszlo Ersek
> ---
>  MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf       |   1 +
>  .../Core/DxeIplPeim/X64/VirtualMemory.c       | 227 ++++++++++++------
>  2 files changed, 151 insertions(+), 77 deletions(-)
>
> diff --git a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf
> index abc3217b01..98bc17fc9d 100644
> --- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf
> +++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf
> @@ -110,6 +110,7 @@ [Pcd.IA32,Pcd.X64]
>    gEfiMdeModulePkgTokenSpaceGuid.PcdNullPointerDetectionPropertyMask    ## CONSUMES
>    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask               ## CONSUMES
>    gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard                       ## CONSUMES
> +  gEfiMdeModulePkgTokenSpaceGuid.PcdUse5LevelPageTable                  ## SOMETIMES_CONSUMES
>
>  [Pcd.IA32,Pcd.X64,Pcd.ARM,Pcd.AARCH64]
>    gEfiMdeModulePkgTokenSpaceGuid.PcdSetNxForStack                       ## SOMETIMES_CONSUMES
>
> diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
> index edc38e4525..a5bcdcc734 100644
> --- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
> +++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
> @@ -15,7 +15,7 @@
>  2) IA-32 Intel(R) Architecture Software Developer's Manual Volume 2:Instruction Set Reference, Intel
>  3) IA-32 Intel(R) Architecture Software Developer's Manual Volume 3:System Programmer's Guide, Intel
>
> -Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.
> +Copyright (c) 2006 - 2019, Intel Corporation. All rights reserved.
> Copyright (c) 2017, AMD Incorporated. All rights reserved.
>
>  SPDX-License-Identifier: BSD-2-Clause-Patent
> @@ -626,14 +626,19 @@ CreateIdentityMappingPageTables (
>    )
>  {
>    UINT32                          RegEax;
> +  UINT32                          RegEbx;
> +  UINT32                          RegEcx;
>    UINT32                          RegEdx;
>    UINT8                           PhysicalAddressBits;
>    EFI_PHYSICAL_ADDRESS            PageAddress;
> +  UINTN                           IndexOfPml5Entries;
>    UINTN                           IndexOfPml4Entries;
>    UINTN                           IndexOfPdpEntries;
>    UINTN                           IndexOfPageDirectoryEntries;
> +  UINT32                          NumberOfPml5EntriesNeeded;
>    UINT32                          NumberOfPml4EntriesNeeded;
>    UINT32                          NumberOfPdpEntriesNeeded;
> +  PAGE_MAP_AND_DIRECTORY_POINTER  *PageMapLevel5Entry;
>    PAGE_MAP_AND_DIRECTORY_POINTER  *PageMapLevel4Entry;
>    PAGE_MAP_AND_DIRECTORY_POINTER  *PageMap;
>    PAGE_MAP_AND_DIRECTORY_POINTER  *PageDirectoryPointerEntry;
> @@ -641,9 +646,11 @@ CreateIdentityMappingPageTables (
>    UINTN                           TotalPagesNum;
>    UINTN                           BigPageAddress;
>    VOID                            *Hob;
> +  BOOLEAN                         Page5LevelSupport;
>    BOOLEAN                         Page1GSupport;
>    PAGE_TABLE_1G_ENTRY             *PageDirectory1GEntry;
>    UINT64                          AddressEncMask;
> +  IA32_CR4                        Cr4;
>
>    //
>    // Make sure AddressEncMask is contained to smallest supported address field
> @@ -677,33 +684,66 @@ CreateIdentityMappingPageTables (
>      }
>    }
>
> +  Page5LevelSupport = FALSE;
> +  if (PcdGetBool (PcdUse5LevelPageTable)) {
> +    AsmCpuidEx (0x7, 0, &RegEax, &RegEbx, &RegEcx, &RegEdx);
> +    DEBUG ((DEBUG_INFO, "Cpuid(7/0): %08x/%08x/%08x/%08x\n", RegEax, RegEbx, RegEcx, RegEdx));
> +    if ((RegEcx & BIT16) != 0) {

(1) Would it be possible to use macro names here, for Index and SubIndex
in AsmCpuidEx(), and a "better" macro name than BIT16, in the RegEcx
check?

> +      Page5LevelSupport = TRUE;
> +    }
> +  }
> +
> +  DEBUG ((DEBUG_INFO, "AddressBits/5LevelPaging/1GPage = %d/%d/%d\n", PhysicalAddressBits, Page5LevelSupport, Page1GSupport));
> +

(2) Can we format this log message as:

  AddressBits=%d 5LevelPaging=%d 1GPage=%d

?

>    //
> -  // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses.
> +  // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses
> +  // when 5-Level Paging is disabled,
> +  // due to either unsupported by HW, or disabled by PCD.
>    //
>    ASSERT (PhysicalAddressBits <= 52);
> -  if (PhysicalAddressBits > 48) {
> +  if (!Page5LevelSupport && PhysicalAddressBits > 48) {
>      PhysicalAddressBits = 48;
>    }
>
>    //
>    // Calculate the table entries needed.
>    //
> -  if (PhysicalAddressBits <= 39 ) {
> -    NumberOfPml4EntriesNeeded = 1;
> -    NumberOfPdpEntriesNeeded = (UINT32)LShiftU64 (1, (PhysicalAddressBits - 30));
> -  } else {
> -    NumberOfPml4EntriesNeeded = (UINT32)LShiftU64 (1, (PhysicalAddressBits - 39));
> -    NumberOfPdpEntriesNeeded = 512;
> +  NumberOfPml5EntriesNeeded = 1;
> +  if (PhysicalAddressBits > 48) {
> +    NumberOfPml5EntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 48);
> +    PhysicalAddressBits = 48;
>    }
>
> +  NumberOfPml4EntriesNeeded = 1;
> +  if (PhysicalAddressBits > 39) {
> +    NumberOfPml4EntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 39);
> +    PhysicalAddressBits = 39;
> +  }
> +
> +  NumberOfPdpEntriesNeeded = 1;
> +  ASSERT (PhysicalAddressBits > 30);
> +  NumberOfPdpEntriesNeeded = (UINT32) LShiftU64 (1, PhysicalAddressBits - 30);
> +
>    //
>    // Pre-allocate big pages to avoid later allocations.
>    //
>    if (!Page1GSupport) {
> -    TotalPagesNum = (NumberOfPdpEntriesNeeded + 1) * NumberOfPml4EntriesNeeded + 1;
> +    TotalPagesNum = ((NumberOfPdpEntriesNeeded + 1) * NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1;
>    } else {
> -    TotalPagesNum = NumberOfPml4EntriesNeeded + 1;
> +    TotalPagesNum = (NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1;
> +  }
> +
> +  //
> +  // Substract the one page occupied by PML5 entries if 5-Level Paging is disabled.
> +  //
> +  if (!Page5LevelSupport) {
> +    TotalPagesNum--;
>    }
> +
> +  DEBUG ((DEBUG_INFO, "Pml5/Pml4/Pdp/TotalPage = %d/%d/%d/%d\n",
> +    NumberOfPml5EntriesNeeded, NumberOfPml4EntriesNeeded,
> +    NumberOfPdpEntriesNeeded, TotalPagesNum));
> +

(3) Same comment about log message formatting as in (2).

(4) Please log UINT32 values with %u, not %d.

(5) Please log UINTN values with %Lu (and cast them to UINT64 first).

(6) More generally, can we please replace this patch with three patches,
in the series?

- the first patch should extract the TotalPagesNum calculation to a
  separate *library* function (it should be a public function in a BASE
  or PEIM library)

- the second patch should extend the TotalPagesNum calculation in the
  library function to 5-level paging

- the third patch should be the remaining code from the present patch.

Here's why: in OvmfPkg/PlatformPei, the GetPeiMemoryCap() function
duplicates this calculation. In OVMF, PEI-phase memory allocations can
be dominated by the page tables built for 64-bit DXE, and so
OvmfPkg/PlatformPei must know, for sizing the permanent PEI RAM, how
much RAM the DXE IPL PEIM will need for those page tables.

I would *really* dislike copying this update (for 5-level paging) from
CreateIdentityMappingPageTables() to GetPeiMemoryCap(). Instead, the
calculation should be extracted to a library function, and then OVMF
(and all other platforms affected similarly) could call the new
function.

If this is not feasible or very much out of scope, then I guess we'll
just keep the PCD as FALSE in OVMF, for the time being.

I'll let others review the rest of this patch.

Thanks
Laszlo

>    BigPageAddress = (UINTN) AllocatePageTableMemory (TotalPagesNum);
>    ASSERT (BigPageAddress != 0);
>
> @@ -711,92 +751,125 @@ CreateIdentityMappingPageTables (
>    // By architecture only one PageMapLevel4 exists - so lets allocate storage for it.
>    //
>    PageMap = (VOID *) BigPageAddress;
> -  BigPageAddress += SIZE_4KB;
> -
> -  PageMapLevel4Entry = PageMap;
> -  PageAddress = 0;
> -  for (IndexOfPml4Entries = 0; IndexOfPml4Entries < NumberOfPml4EntriesNeeded; IndexOfPml4Entries++, PageMapLevel4Entry++) {
> +  if (Page5LevelSupport) {
>      //
> -    // Each PML4 entry points to a page of Page Directory Pointer entires.
> -    // So lets allocate space for them and fill them in in the IndexOfPdpEntries loop.
> +    // By architecture only one PageMapLevel5 exists - so lets allocate storage for it.
>      //
> -    PageDirectoryPointerEntry = (VOID *) BigPageAddress;
> -    BigPageAddress += SIZE_4KB;
> +    PageMapLevel5Entry = PageMap;
> +    BigPageAddress += SIZE_4KB;
> +  }
> +  PageAddress = 0;
>
> +  for ( IndexOfPml5Entries = 0
> +      ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded
> +      ; IndexOfPml5Entries++, PageMapLevel5Entry++) {
>      //
> -    // Make a PML4 Entry
> +    // Each PML5 entry points to a page of PML4 entires.
> +    // So lets allocate space for them and fill them in in the IndexOfPml4Entries loop.
> +    // When 5-Level Paging is disabled, below allocation happens only once.
>      //
> -    PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask;
> -    PageMapLevel4Entry->Bits.ReadWrite = 1;
> -    PageMapLevel4Entry->Bits.Present = 1;
> +    PageMapLevel4Entry = (VOID *) BigPageAddress;
> +    BigPageAddress += SIZE_4KB;
>
> -    if (Page1GSupport) {
> -      PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry;
> +    if (Page5LevelSupport) {
> +      //
> +      // Make a PML5 Entry
> +      //
> +      PageMapLevel5Entry->Uint64 = (UINT64) (UINTN) PageMapLevel4Entry;
> +      PageMapLevel5Entry->Bits.ReadWrite = 1;
> +      PageMapLevel5Entry->Bits.Present = 1;
> +    }
>
> -      for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += SIZE_1GB) {
> -        if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) {
> -          Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, StackBase, StackSize);
> -        } else {
> -          //
> -          // Fill in the Page Directory entries
> -          //
> -          PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
> -          PageDirectory1GEntry->Bits.ReadWrite = 1;
> -          PageDirectory1GEntry->Bits.Present = 1;
> -          PageDirectory1GEntry->Bits.MustBe1 = 1;
> -        }
> -      }
> -    } else {
> -      for (IndexOfPdpEntries = 0; IndexOfPdpEntries < NumberOfPdpEntriesNeeded; IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
> -        //
> -        // Each Directory Pointer entries points to a page of Page Directory entires.
> -        // So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop.
> -        //
> -        PageDirectoryEntry = (VOID *) BigPageAddress;
> -        BigPageAddress += SIZE_4KB;
>
> +    for ( IndexOfPml4Entries = 0
> +        ; IndexOfPml4Entries < (NumberOfPml5EntriesNeeded == 1 ? NumberOfPml4EntriesNeeded : 512)
> +        ; IndexOfPml4Entries++, PageMapLevel4Entry++) {
> +      //
> +      // Each PML4 entry points to a page of Page Directory Pointer entires.
> +      // So lets allocate space for them and fill them in in the IndexOfPdpEntries loop.
> +      //
> +      PageDirectoryPointerEntry = (VOID *) BigPageAddress;
> +      BigPageAddress += SIZE_4KB;
>
> -        //
> -        // Fill in a Page Directory Pointer Entries
> -        //
> -        PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask;
> -        PageDirectoryPointerEntry->Bits.ReadWrite = 1;
> -        PageDirectoryPointerEntry->Bits.Present = 1;
> +      //
> +      // Make a PML4 Entry
> +      //
> +      PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask;
> +      PageMapLevel4Entry->Bits.ReadWrite = 1;
> +      PageMapLevel4Entry->Bits.Present = 1;
>
> -        for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) {
> -          if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) {
> -            //
> -            // Need to split this 2M page that covers NULL or stack range.
> -            //
> -            Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, StackBase, StackSize);
> +      if (Page1GSupport) {
> +        PageDirectory1GEntry = (VOID *) PageDirectoryPointerEntry;
> +
> +        for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += SIZE_1GB) {
> +          if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize)) {
> +            Split1GPageTo2M (PageAddress, (UINT64 *) PageDirectory1GEntry, StackBase, StackSize);
>            } else {
>              //
>              // Fill in the Page Directory entries
>              //
> -            PageDirectoryEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
> -            PageDirectoryEntry->Bits.ReadWrite = 1;
> -            PageDirectoryEntry->Bits.Present = 1;
> -            PageDirectoryEntry->Bits.MustBe1 = 1;
> +            PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
> +            PageDirectory1GEntry->Bits.ReadWrite = 1;
> +            PageDirectory1GEntry->Bits.Present = 1;
> +            PageDirectory1GEntry->Bits.MustBe1 = 1;
>            }
>          }
> -      }
> +      } else {
> +        for ( IndexOfPdpEntries = 0
> +            ; IndexOfPdpEntries < (NumberOfPml4EntriesNeeded == 1 ? NumberOfPdpEntriesNeeded : 512)
> +            ; IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
> +          //
> +          // Each Directory Pointer entries points to a page of Page Directory entires.
> +          // So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop.
> +          //
> +          PageDirectoryEntry = (VOID *) BigPageAddress;
> +          BigPageAddress += SIZE_4KB;
>
> -      for (; IndexOfPdpEntries < 512; IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
> -        ZeroMem (
> -          PageDirectoryPointerEntry,
> -          sizeof(PAGE_MAP_AND_DIRECTORY_POINTER)
> -          );
> +          //
> +          // Fill in a Page Directory Pointer Entries
> +          //
> +          PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask;
> +          PageDirectoryPointerEntry->Bits.ReadWrite = 1;
> +          PageDirectoryPointerEntry->Bits.Present = 1;
> +
> +          for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) {
> +            if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize)) {
> +              //
> +              // Need to split this 2M page that covers NULL or stack range.
> +              //
> +              Split2MPageTo4K (PageAddress, (UINT64 *) PageDirectoryEntry, StackBase, StackSize);
> +            } else {
> +              //
> +              // Fill in the Page Directory entries
> +              //
> +              PageDirectoryEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
> +              PageDirectoryEntry->Bits.ReadWrite = 1;
> +              PageDirectoryEntry->Bits.Present = 1;
> +              PageDirectoryEntry->Bits.MustBe1 = 1;
> +            }
> +          }
> +        }
> +
> +        //
> +        // Fill with null entry for unused PDPTE
> +        //
> +        ZeroMem (PageDirectoryPointerEntry, (512 - IndexOfPdpEntries) * sizeof(PAGE_MAP_AND_DIRECTORY_POINTER));
>        }
>      }
> +
> +    //
> +    // For the PML4 entries we are not using fill in a null entry.
> +    //
> +    ZeroMem (PageMapLevel4Entry, (512 - IndexOfPml4Entries) * sizeof (PAGE_MAP_AND_DIRECTORY_POINTER));
>    }
>
> -  //
> -  // For the PML4 entries we are not using fill in a null entry.
> -  //
> -  for (; IndexOfPml4Entries < 512; IndexOfPml4Entries++, PageMapLevel4Entry++) {
> -    ZeroMem (
> -      PageMapLevel4Entry,
> -      sizeof (PAGE_MAP_AND_DIRECTORY_POINTER)
> -      );
> +  if (Page5LevelSupport) {
> +    Cr4.UintN = AsmReadCr4 ();
> +    Cr4.Bits.LA57 = 1;
> +    AsmWriteCr4 (Cr4.UintN);
> +    //
> +    // For the PML5 entries we are not using fill in a null entry.
> +    //
> +    ZeroMem (PageMapLevel5Entry, (512 - IndexOfPml5Entries) * sizeof (PAGE_MAP_AND_DIRECTORY_POINTER));
>    }
>
>    //