public inbox for devel@edk2.groups.io
From: "Gerd Hoffmann" <kraxel@redhat.com>
To: devel@edk2.groups.io
Cc: Jiewen Yao <jiewen.yao@intel.com>,
	Laszlo Ersek <lersek@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Oliver Steffen <osteffen@redhat.com>,
	Ard Biesheuvel <ardb+tianocore@kernel.org>
Subject: [edk2-devel] [PATCH v2 3/4] OvmfPkg/PlatformPei: rewrite page table calculation
Date: Fri,  2 Feb 2024 11:47:19 +0100
Message-ID: <20240202104720.1275308-4-kraxel@redhat.com>
In-Reply-To: <20240202104720.1275308-1-kraxel@redhat.com>

Take 5-level paging into account.  Simplify the calculation to make it
easier to understand.  Add some comments and improve the ASSERTs.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
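Note for reviewers: the worst-case bounds in the new ASSERTs can be
double-checked outside the tree.  Below is a minimal standalone sketch
of the same arithmetic in plain C (stdint types and a local MAX macro
stand in for the EDK2 ones; the PageTablePages() helper and its names
are mine, not from the tree):

  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  #define MAX(a, b)  ((a) > (b) ? (a) : (b))

  /* Page table pages needed to identity-map (1 << PhysBits) bytes with
     2MB leaf pages, or with 1GB leaf pages when Page1GSupport is set. */
  static uint32_t
  PageTablePages (unsigned PhysBits, int Page1GSupport)
  {
    uint64_t MaxAddr     = 1ULL << PhysBits;
    uint32_t Level2Pages = (uint32_t)(MaxAddr >> 30);    /* one per GB */
    uint32_t Level3Pages = MAX (Level2Pages >> 9, 1U);
    uint32_t Level4Pages = MAX (Level3Pages >> 9, 1U);
    uint32_t Level5Pages = 1;

    if (Page1GSupport) {
      Level2Pages = 0;   /* 1GB leaf entries live in the level 3 pages */
    }

    return Level5Pages + Level4Pages + Level3Pages + Level2Pages;
  }

  int
  main (void)
  {
    /* 57 phys bits (the 5-level paging maximum) with 1G pages:
       0x40000 + 0x200 + 1 = 0x40201, the bound asserted in the
       1G-page branch. */
    assert (PageTablePages (57, 1) == 0x40201);

    /* 40 phys bits (the PlatformAddressWidthFromCpuid() cap) without
       1G pages: 0x400 + 2 + 1 + 1 = 0x404, the bound asserted in the
       2MB-page branch. */
    assert (PageTablePages (40, 0) == 0x404);

    printf ("both worst-case bounds reproduced\n");
    return 0;
  }

Both totals land exactly on the asserted bounds, so the bounds are
tight for the maximum supported widths (57 bits with 5-level paging,
40 bits without 1G pages), not merely safe over-estimates.
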
 OvmfPkg/PlatformPei/MemDetect.c | 56 ++++++++++++++++++++-------------
 1 file changed, 35 insertions(+), 21 deletions(-)

diff --git a/OvmfPkg/PlatformPei/MemDetect.c b/OvmfPkg/PlatformPei/MemDetect.c
index 83f1c1d02a26..48ad0b83a55e 100644
--- a/OvmfPkg/PlatformPei/MemDetect.c
+++ b/OvmfPkg/PlatformPei/MemDetect.c
@@ -184,9 +184,12 @@ GetPeiMemoryCap (
   BOOLEAN  Page1GSupport;
   UINT32   RegEax;
   UINT32   RegEdx;
-  UINT32   Pml4Entries;
-  UINT32   PdpEntries;
-  UINTN    TotalPages;
+  UINT64   MaxAddr;
+  UINT32   Level5Pages;
+  UINT32   Level4Pages;
+  UINT32   Level3Pages;
+  UINT32   Level2Pages;
+  UINT32   TotalPages;
   UINT64   ApStacks;
   UINT64   MemoryCap;
 
@@ -203,8 +206,7 @@ GetPeiMemoryCap (
   //
   // Dependent on physical address width, PEI memory allocations can be
   // dominated by the page tables built for 64-bit DXE. So we key the cap off
-  // of those. The code below is based on CreateIdentityMappingPageTables() in
-  // "MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c".
+  // of those.
   //
   Page1GSupport = FALSE;
   if (PcdGetBool (PcdUse1GPageTable)) {
@@ -217,25 +219,37 @@ GetPeiMemoryCap (
     }
   }
 
-  if (PlatformInfoHob->PhysMemAddressWidth <= 39) {
-    Pml4Entries = 1;
-    PdpEntries  = 1 << (PlatformInfoHob->PhysMemAddressWidth - 30);
-    ASSERT (PdpEntries <= 0x200);
+  //
+  // - A 4KB page accommodates the least significant 12 bits of the
+  //   virtual address.
+  // - A page table entry at any level consumes 8 bytes, so a 4KB page
+  //   table page (at any level) contains 512 entries, and
+  //   accommodates 9 bits of the virtual address.
+  // - We minimally cover the physical address space with 2MB pages,
+  //   so level 1 never exists.
+  // - If 1G paging is available, then level 2 doesn't exist either.
+  // - Start with level 2, where a page table page accommodates
+  //   9 + 9 + 12 = 30 bits of the virtual address (and covers 1GB of
+  //   physical address space).
+  //
+
+  MaxAddr     = LShiftU64 (1, PlatformInfoHob->PhysMemAddressWidth);
+  Level2Pages = (UINT32)RShiftU64 (MaxAddr, 30);
+  Level3Pages = MAX (Level2Pages >> 9, 1);
+  Level4Pages = MAX (Level3Pages >> 9, 1);
+  Level5Pages = 1;
+
+  if (Page1GSupport) {
+    Level2Pages = 0;
+    TotalPages  = Level5Pages + Level4Pages + Level3Pages;
+    ASSERT (TotalPages <= 0x40201);
   } else {
-    if (PlatformInfoHob->PhysMemAddressWidth > 48) {
-      Pml4Entries = 0x200;
-    } else {
-      Pml4Entries = 1 << (PlatformInfoHob->PhysMemAddressWidth - 39);
-    }
-
-    ASSERT (Pml4Entries <= 0x200);
-    PdpEntries = 512;
+    TotalPages = Level5Pages + Level4Pages + Level3Pages + Level2Pages;
+    // PlatformAddressWidthFromCpuid() caps at 40 phys bits without 1G pages.
+    ASSERT (PlatformInfoHob->PhysMemAddressWidth <= 40);
+    ASSERT (TotalPages <= 0x404);
   }
 
-  TotalPages = Page1GSupport ? Pml4Entries + 1 :
-               (PdpEntries + 1) * Pml4Entries + 1;
-  ASSERT (TotalPages <= 0x40201);
-
   //
   // With 32k stacks and 4096 vcpus this lands at 128 MB (far away
   // from MAX_UINT32).
-- 
2.43.0
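
Side note on the comment at the tail of the last hunk: with 32 KiB
stacks and 4096 vCPUs the AP stack reservation works out to
4096 * 32 KiB = 128 MiB, which is indeed far below MAX_UINT32.  A
trivial standalone check of that arithmetic (plain C, names are
illustrative and not taken from the tree):

  #include <assert.h>
  #include <stdint.h>

  int
  main (void)
  {
    uint64_t ApStackSize = 32 * 1024;            /* 32 KiB per AP       */
    uint64_t MaxCpuCount = 4096;                 /* vCPUs per the hunk  */
    uint64_t ApStacks    = MaxCpuCount * ApStackSize;

    assert (ApStacks == 128ULL * 1024 * 1024);   /* 128 MiB             */
    assert (ApStacks < (uint64_t)UINT32_MAX);    /* far from MAX_UINT32 */
    return 0;
  }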



Thread overview: 14+ messages
2024-02-02 10:47 [edk2-devel] [PATCH v2 0/4] OvmfPkg/PlatformPei: scaleability fixes for GetPeiMemoryCap() Gerd Hoffmann
2024-02-02 10:47 ` [edk2-devel] [PATCH v2 1/4] OvmfPkg/PlatformPei: log a warning when memory is tight Gerd Hoffmann
2024-02-05  7:45   ` Laszlo Ersek
2024-02-02 10:47 ` [edk2-devel] [PATCH v2 2/4] OvmfPkg/PlatformPei: consider AP stacks for pei memory cap Gerd Hoffmann
2024-02-05  7:57   ` Laszlo Ersek
2024-02-02 10:47 ` Gerd Hoffmann [this message]
2024-02-05  8:14   ` [edk2-devel] [PATCH v2 3/4] OvmfPkg/PlatformPei: rewrite page table calculation Laszlo Ersek
2024-02-05  8:19     ` Laszlo Ersek
2024-02-14  9:32     ` Gerd Hoffmann
2024-02-14 10:48       ` Laszlo Ersek
2024-02-14 11:07         ` Gerd Hoffmann
2024-02-14 11:58           ` Laszlo Ersek
2024-02-02 10:47 ` [edk2-devel] [PATCH v2 4/4] OvmfPkg/PlatformPei: log pei memory cap details Gerd Hoffmann
2024-02-05  8:27   ` Laszlo Ersek
