From: "duntan" <dun.tan@intel.com>
To: devel@edk2.groups.io
Cc: Eric Dong <eric.dong@intel.com>, Ray Ni <ray.ni@intel.com>,
	Rahul Kumar <rahul1.kumar@intel.com>,
	Gerd Hoffmann <kraxel@redhat.com>
Subject: [Patch V2 6/8] UefiCpuPkg: Refinement to current smm page table generation code
Date: Wed, 12 Apr 2023 16:53:41 +0800
Message-ID: <20230412085343.1077-7-dun.tan@intel.com>
In-Reply-To: <20230412085343.1077-1-dun.tan@intel.com>

This commit refines the current SMM page table generation code. It
adds a new GenSmmPageTable() API that creates the SMM page table
based on the PageTableMap() API in CpuPageTableLib. The caller only
needs to specify the paging mode and the PhysicalAddressBits to map.
The function can create the IA32 PAE page table as well as the X64
4-level and 5-level page tables.
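
For illustration, a minimal sketch of how the new API is called,
mirroring the IA32 and X64 call sites in this patch (the locals
PageTable/PhysicalAddressBits and the globals mPagingMode,
mPhysicalAddressBits and mCpuSmmRestrictedMemoryAccess are the
existing names from the code; the snippet itself is not part of the
patch):

  //
  // IA32: PAE paging covering the full physical address space.
  //
  PageTable = GenSmmPageTable (PagingPae, mPhysicalAddressBits);

  //
  // X64: 4-level or 5-level paging. When access to non-SMRAM memory
  // is not restricted, only the first 4GB (32 address bits) is
  // mapped up front.
  //
  PhysicalAddressBits = mCpuSmmRestrictedMemoryAccess ? mPhysicalAddressBits : 32;
  PageTable           = GenSmmPageTable (mPagingMode, PhysicalAddressBits);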

Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
---
 UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c           |   2 +-
 UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmProfileArch.c    |   2 +-
 UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c              | 130 ----------------------------------------------------------------------------------------------------------------------------------
 UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h         |  15 +++++++++------
 UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c |  65 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c            | 218 +++++++++++++++++++++++---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c     |  19 ++-----------------
 7 files changed, 101 insertions(+), 350 deletions(-)

diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
index 9c8107080a..b11264ce4a 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
@@ -63,7 +63,7 @@ SmmInitPageTable (
     InitializeIDTSmmStackGuard ();
   }
 
-  return Gen4GPageTable (TRUE);
+  return GenSmmPageTable (PagingPae, mPhysicalAddressBits);
 }
 
 /**
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmProfileArch.c b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmProfileArch.c
index bba4a6f058..650090e534 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmProfileArch.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmProfileArch.c
@@ -18,7 +18,7 @@ InitSmmS3Cr3 (
   VOID
   )
 {
-  mSmmS3ResumeState->SmmS3Cr3 = Gen4GPageTable (TRUE);
+  mSmmS3ResumeState->SmmS3Cr3 = GenSmmPageTable (PagingPae, mPhysicalAddressBits);
 
   return;
 }
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
index 1878252eac..f8b81fc96e 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
@@ -999,136 +999,6 @@ APHandler (
   ReleaseSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
 }
 
-/**
-  Create 4G PageTable in SMRAM.
-
-  @param[in]      Is32BitPageTable Whether the page table is 32-bit PAE
-  @return         PageTable Address
-
-**/
-UINT32
-Gen4GPageTable (
-  IN      BOOLEAN  Is32BitPageTable
-  )
-{
-  VOID    *PageTable;
-  UINTN   Index;
-  UINT64  *Pte;
-  UINTN   PagesNeeded;
-  UINTN   Low2MBoundary;
-  UINTN   High2MBoundary;
-  UINTN   Pages;
-  UINTN   GuardPage;
-  UINT64  *Pdpte;
-  UINTN   PageIndex;
-  UINTN   PageAddress;
-
-  Low2MBoundary  = 0;
-  High2MBoundary = 0;
-  PagesNeeded    = 0;
-  if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
-    //
-    // Add one more page for known good stack, then find the lower 2MB aligned address.
-    //
-    Low2MBoundary = (mSmmStackArrayBase + EFI_PAGE_SIZE) & ~(SIZE_2MB-1);
-    //
-    // Add two more pages for known good stack and stack guard page,
-    // then find the lower 2MB aligned address.
-    //
-    High2MBoundary = (mSmmStackArrayEnd - mSmmStackSize - mSmmShadowStackSize + EFI_PAGE_SIZE * 2) & ~(SIZE_2MB-1);
-    PagesNeeded    = ((High2MBoundary - Low2MBoundary) / SIZE_2MB) + 1;
-  }
-
-  //
-  // Allocate the page table
-  //
-  PageTable = AllocatePageTableMemory (5 + PagesNeeded);
-  ASSERT (PageTable != NULL);
-
-  PageTable = (VOID *)((UINTN)PageTable);
-  Pte       = (UINT64 *)PageTable;
-
-  //
-  // Zero out all page table entries first
-  //
-  ZeroMem (Pte, EFI_PAGES_TO_SIZE (1));
-
-  //
-  // Set Page Directory Pointers
-  //
-  for (Index = 0; Index < 4; Index++) {
-    Pte[Index] = ((UINTN)PageTable + EFI_PAGE_SIZE * (Index + 1)) | mAddressEncMask |
-                 (Is32BitPageTable ? IA32_PAE_PDPTE_ATTRIBUTE_BITS : PAGE_ATTRIBUTE_BITS);
-  }
-
-  Pte += EFI_PAGE_SIZE / sizeof (*Pte);
-
-  //
-  // Fill in Page Directory Entries
-  //
-  for (Index = 0; Index < EFI_PAGE_SIZE * 4 / sizeof (*Pte); Index++) {
-    Pte[Index] = (Index << 21) | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
-  }
-
-  Pdpte = (UINT64 *)PageTable;
-  if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
-    Pages     = (UINTN)PageTable + EFI_PAGES_TO_SIZE (5);
-    GuardPage = mSmmStackArrayBase + EFI_PAGE_SIZE;
-    for (PageIndex = Low2MBoundary; PageIndex <= High2MBoundary; PageIndex += SIZE_2MB) {
-      Pte                                             = (UINT64 *)(UINTN)(Pdpte[BitFieldRead32 ((UINT32)PageIndex, 30, 31)] & ~mAddressEncMask & ~(EFI_PAGE_SIZE - 1));
-      Pte[BitFieldRead32 ((UINT32)PageIndex, 21, 29)] = (UINT64)Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
-      //
-      // Fill in Page Table Entries
-      //
-      Pte         = (UINT64 *)Pages;
-      PageAddress = PageIndex;
-      for (Index = 0; Index < EFI_PAGE_SIZE / sizeof (*Pte); Index++) {
-        if (PageAddress == GuardPage) {
-          //
-          // Mark the guard page as non-present
-          //
-          Pte[Index] = PageAddress | mAddressEncMask;
-          GuardPage += (mSmmStackSize + mSmmShadowStackSize);
-          if (GuardPage > mSmmStackArrayEnd) {
-            GuardPage = 0;
-          }
-        } else {
-          Pte[Index] = PageAddress | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
-        }
-
-        PageAddress += EFI_PAGE_SIZE;
-      }
-
-      Pages += EFI_PAGE_SIZE;
-    }
-  }
-
-  if ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & BIT1) != 0) {
-    Pte = (UINT64 *)(UINTN)(Pdpte[0] & ~mAddressEncMask & ~(EFI_PAGE_SIZE - 1));
-    if ((Pte[0] & IA32_PG_PS) == 0) {
-      // 4K-page entries are already mapped. Just hide the first one anyway.
-      Pte     = (UINT64 *)(UINTN)(Pte[0] & ~mAddressEncMask & ~(EFI_PAGE_SIZE - 1));
-      Pte[0] &= ~(UINT64)IA32_PG_P; // Hide page 0
-    } else {
-      // Create 4K-page entries
-      Pages = (UINTN)AllocatePageTableMemory (1);
-      ASSERT (Pages != 0);
-
-      Pte[0] = (UINT64)(Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS);
-
-      Pte         = (UINT64 *)Pages;
-      PageAddress = 0;
-      Pte[0]      = PageAddress | mAddressEncMask; // Hide page 0 but present left
-      for (Index = 1; Index < EFI_PAGE_SIZE / sizeof (*Pte); Index++) {
-        PageAddress += EFI_PAGE_SIZE;
-        Pte[Index]   = PageAddress | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
-      }
-    }
-  }
-
-  return (UINT32)(UINTN)PageTable;
-}
-
 /**
   Checks whether the input token is the current used token.
 
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
index a155e09200..b72c883fc5 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
@@ -542,15 +542,18 @@ extern UINT8                         mPhysicalAddressBits;
 extern UINT64  mAddressEncMask;
 
 /**
-  Create 4G PageTable in SMRAM.
+  Create page table based on input PagingMode and PhysicalAddressBits in smm.
 
-  @param[in]      Is32BitPageTable Whether the page table is 32-bit PAE
-  @return         PageTable Address
+  @param[in]      PagingMode           The paging mode.
+  @param[in]      PhysicalAddressBits  The bits of physical address to map.
+
+  @retval         PageTable Address
 
 **/
-UINT32
-Gen4GPageTable (
-  IN      BOOLEAN  Is32BitPageTable
+UINTN
+GenSmmPageTable (
+  IN PAGING_MODE  PagingMode,
+  IN UINT8        PhysicalAddressBits
   );
 
 /**
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
index 89040d386e..5b970157c6 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
@@ -1549,6 +1549,71 @@ EdkiiSmmClearMemoryAttributes (
   return SmmClearMemoryAttributes (BaseAddress, Length, Attributes);
 }
 
+/**
+  Create page table based on input PagingMode and PhysicalAddressBits in smm.
+
+  @param[in]      PagingMode           The paging mode.
+  @param[in]      PhysicalAddressBits  The bits of physical address to map.
+
+  @retval         PageTable Address
+
+**/
+UINTN
+GenSmmPageTable (
+  IN PAGING_MODE  PagingMode,
+  IN UINT8        PhysicalAddressBits
+  )
+{
+  UINTN               PageTableBufferSize;
+  UINTN               PageTable;
+  VOID                *PageTableBuffer;
+  IA32_MAP_ATTRIBUTE  MapAttribute;
+  IA32_MAP_ATTRIBUTE  MapMask;
+  RETURN_STATUS       Status;
+  UINTN               GuardPage;
+  UINTN               Index;
+  UINT64              Length;
+
+  Length                           = LShiftU64 (1, PhysicalAddressBits);
+  PageTable                        = 0;
+  PageTableBufferSize              = 0;
+  MapMask.Uint64                   = MAX_UINT64;
+  MapAttribute.Uint64              = mAddressEncMask;
+  MapAttribute.Bits.Present        = 1;
+  MapAttribute.Bits.ReadWrite      = 1;
+  MapAttribute.Bits.UserSupervisor = 1;
+  MapAttribute.Bits.Accessed       = 1;
+  MapAttribute.Bits.Dirty          = 1;
+
+  Status = PageTableMap (&PageTable, PagingMode, NULL, &PageTableBufferSize, 0, Length, &MapAttribute, &MapMask, NULL);
+  ASSERT (Status == RETURN_BUFFER_TOO_SMALL);
+  DEBUG ((DEBUG_INFO, "GenSMMPageTable: 0x%x bytes needed for initial SMM page table\n", PageTableBufferSize));
+  PageTableBuffer = AllocatePageTableMemory (EFI_SIZE_TO_PAGES (PageTableBufferSize));
+  ASSERT (PageTableBuffer != NULL);
+  Status = PageTableMap (&PageTable, PagingMode, PageTableBuffer, &PageTableBufferSize, 0, Length, &MapAttribute, &MapMask, NULL);
+  ASSERT (Status == RETURN_SUCCESS);
+  ASSERT (PageTableBufferSize == 0);
+
+  if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
+    //
+    // Mark the guard page at the bottom of smm stack as non-present
+    //
+    for (Index = 0; Index < gSmmCpuPrivate->SmmCoreEntryContext.NumberOfCpus; Index++) {
+      GuardPage = mSmmStackArrayBase + EFI_PAGE_SIZE + Index * (mSmmStackSize + mSmmShadowStackSize);
+      Status    = ConvertMemoryPageAttributes (PageTable, PagingMode, GuardPage, SIZE_4KB, EFI_MEMORY_RP, TRUE, NULL);
+    }
+  }
+
+  if ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & BIT1) != 0) {
+    //
+    // Mark [0, 4k] as non-present
+    //
+    Status = ConvertMemoryPageAttributes (PageTable, PagingMode, 0, SIZE_4KB, EFI_MEMORY_RP, TRUE, NULL);
+  }
+
+  return (UINTN)PageTable;
+}
+
 /**
   This function retrieves the attributes of the memory region specified by
   BaseAddress and Length. If different attributes are got from different part
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
index 25ced50955..cdbf52ae77 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
@@ -167,160 +167,6 @@ CalculateMaximumSupportAddress (
   return PhysicalAddressBits;
 }
 
-/**
-  Set static page table.
-
-  @param[in] PageTable              Address of page table.
-  @param[in] PhysicalAddressBits    The maximum physical address bits supported.
-**/
-VOID
-SetStaticPageTable (
-  IN UINTN  PageTable,
-  IN UINT8  PhysicalAddressBits
-  )
-{
-  UINT64  PageAddress;
-  UINTN   NumberOfPml5EntriesNeeded;
-  UINTN   NumberOfPml4EntriesNeeded;
-  UINTN   NumberOfPdpEntriesNeeded;
-  UINTN   IndexOfPml5Entries;
-  UINTN   IndexOfPml4Entries;
-  UINTN   IndexOfPdpEntries;
-  UINTN   IndexOfPageDirectoryEntries;
-  UINT64  *PageMapLevel5Entry;
-  UINT64  *PageMapLevel4Entry;
-  UINT64  *PageMap;
-  UINT64  *PageDirectoryPointerEntry;
-  UINT64  *PageDirectory1GEntry;
-  UINT64  *PageDirectoryEntry;
-
-  //
-  // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses
-  //  when 5-Level Paging is disabled.
-  //
-  ASSERT (PhysicalAddressBits <= 52);
-  if (!m5LevelPagingNeeded && (PhysicalAddressBits > 48)) {
-    PhysicalAddressBits = 48;
-  }
-
-  NumberOfPml5EntriesNeeded = 1;
-  if (PhysicalAddressBits > 48) {
-    NumberOfPml5EntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits - 48);
-    PhysicalAddressBits       = 48;
-  }
-
-  NumberOfPml4EntriesNeeded = 1;
-  if (PhysicalAddressBits > 39) {
-    NumberOfPml4EntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits - 39);
-    PhysicalAddressBits       = 39;
-  }
-
-  NumberOfPdpEntriesNeeded = 1;
-  ASSERT (PhysicalAddressBits > 30);
-  NumberOfPdpEntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits - 30);
-
-  //
-  // By architecture only one PageMapLevel4 exists - so lets allocate storage for it.
-  //
-  PageMap = (VOID *)PageTable;
-
-  PageMapLevel4Entry = PageMap;
-  PageMapLevel5Entry = NULL;
-  if (m5LevelPagingNeeded) {
-    //
-    // By architecture only one PageMapLevel5 exists - so lets allocate storage for it.
-    //
-    PageMapLevel5Entry = PageMap;
-  }
-
-  PageAddress = 0;
-
-  for ( IndexOfPml5Entries = 0
-        ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded
-        ; IndexOfPml5Entries++, PageMapLevel5Entry++)
-  {
-    //
-    // Each PML5 entry points to a page of PML4 entires.
-    // So lets allocate space for them and fill them in in the IndexOfPml4Entries loop.
-    // When 5-Level Paging is disabled, below allocation happens only once.
-    //
-    if (m5LevelPagingNeeded) {
-      PageMapLevel4Entry = (UINT64 *)((*PageMapLevel5Entry) & ~mAddressEncMask & gPhyMask);
-      if (PageMapLevel4Entry == NULL) {
-        PageMapLevel4Entry = AllocatePageTableMemory (1);
-        ASSERT (PageMapLevel4Entry != NULL);
-        ZeroMem (PageMapLevel4Entry, EFI_PAGES_TO_SIZE (1));
-
-        *PageMapLevel5Entry = (UINT64)(UINTN)PageMapLevel4Entry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
-      }
-    }
-
-    for (IndexOfPml4Entries = 0; IndexOfPml4Entries < (NumberOfPml5EntriesNeeded == 1 ? NumberOfPml4EntriesNeeded : 512); IndexOfPml4Entries++, PageMapLevel4Entry++) {
-      //
-      // Each PML4 entry points to a page of Page Directory Pointer entries.
-      //
-      PageDirectoryPointerEntry = (UINT64 *)((*PageMapLevel4Entry) & ~mAddressEncMask & gPhyMask);
-      if (PageDirectoryPointerEntry == NULL) {
-        PageDirectoryPointerEntry = AllocatePageTableMemory (1);
-        ASSERT (PageDirectoryPointerEntry != NULL);
-        ZeroMem (PageDirectoryPointerEntry, EFI_PAGES_TO_SIZE (1));
-
-        *PageMapLevel4Entry = (UINT64)(UINTN)PageDirectoryPointerEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
-      }
-
-      if (m1GPageTableSupport) {
-        PageDirectory1GEntry = PageDirectoryPointerEntry;
-        for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += SIZE_1GB) {
-          if ((IndexOfPml4Entries == 0) && (IndexOfPageDirectoryEntries < 4)) {
-            //
-            // Skip the < 4G entries
-            //
-            continue;
-          }
-
-          //
-          // Fill in the Page Directory entries
-          //
-          *PageDirectory1GEntry = PageAddress | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
-        }
-      } else {
-        PageAddress = BASE_4GB;
-        for (IndexOfPdpEntries = 0; IndexOfPdpEntries < (NumberOfPml4EntriesNeeded == 1 ? NumberOfPdpEntriesNeeded : 512); IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
-          if ((IndexOfPml4Entries == 0) && (IndexOfPdpEntries < 4)) {
-            //
-            // Skip the < 4G entries
-            //
-            continue;
-          }
-
-          //
-          // Each Directory Pointer entries points to a page of Page Directory entires.
-          // So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop.
-          //
-          PageDirectoryEntry = (UINT64 *)((*PageDirectoryPointerEntry) & ~mAddressEncMask & gPhyMask);
-          if (PageDirectoryEntry == NULL) {
-            PageDirectoryEntry = AllocatePageTableMemory (1);
-            ASSERT (PageDirectoryEntry != NULL);
-            ZeroMem (PageDirectoryEntry, EFI_PAGES_TO_SIZE (1));
-
-            //
-            // Fill in a Page Directory Pointer Entries
-            //
-            *PageDirectoryPointerEntry = (UINT64)(UINTN)PageDirectoryEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
-          }
-
-          for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) {
-            //
-            // Fill in the Page Directory entries
-            //
-            *PageDirectoryEntry = PageAddress | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
-          }
-        }
-      }
-    }
-  }
-}
-
 /**
   Create PageTable for SMM use.
 
@@ -332,15 +178,16 @@ SmmInitPageTable (
   VOID
   )
 {
-  EFI_PHYSICAL_ADDRESS      Pages;
-  UINT64                    *PTEntry;
+  UINTN                     PageTable;
   LIST_ENTRY                *FreePage;
   UINTN                     Index;
   UINTN                     PageFaultHandlerHookAddress;
   IA32_IDT_GATE_DESCRIPTOR  *IdtEntry;
   EFI_STATUS                Status;
+  UINT64                    *Pml3Entry;
   UINT64                    *Pml4Entry;
   UINT64                    *Pml5Entry;
+  UINT8                     PhysicalAddressBits;
 
   //
   // Initialize spin lock
@@ -357,59 +204,40 @@ SmmInitPageTable (
   } else {
     mPagingMode = m1GPageTableSupport ? Paging4Level1GB : Paging4Level;
   }
+
   DEBUG ((DEBUG_INFO, "5LevelPaging Needed             - %d\n", m5LevelPagingNeeded));
   DEBUG ((DEBUG_INFO, "1GPageTable Support             - %d\n", m1GPageTableSupport));
   DEBUG ((DEBUG_INFO, "PcdCpuSmmRestrictedMemoryAccess - %d\n", mCpuSmmRestrictedMemoryAccess));
   DEBUG ((DEBUG_INFO, "PhysicalAddressBits             - %d\n", mPhysicalAddressBits));
-  //
-  // Generate PAE page table for the first 4GB memory space
-  //
-  Pages = Gen4GPageTable (FALSE);
 
   //
-  // Set IA32_PG_PMNT bit to mask this entry
+  // Generate initial SMM page table
   //
-  PTEntry = (UINT64 *)(UINTN)Pages;
-  for (Index = 0; Index < 4; Index++) {
-    PTEntry[Index] |= IA32_PG_PMNT;
-  }
+  PhysicalAddressBits = mCpuSmmRestrictedMemoryAccess ? mPhysicalAddressBits : 32;
+  PageTable           = GenSmmPageTable (mPagingMode, PhysicalAddressBits);
 
-  //
-  // Fill Page-Table-Level4 (PML4) entry
-  //
-  Pml4Entry = (UINT64 *)AllocatePageTableMemory (1);
-  ASSERT (Pml4Entry != NULL);
-  *Pml4Entry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
-  ZeroMem (Pml4Entry + 1, EFI_PAGE_SIZE - sizeof (*Pml4Entry));
+  if (m5LevelPagingNeeded) {
+    Pml5Entry = (UINT64 *)PageTable;
+    SetSubEntriesNum (Pml5Entry, 1);
+    Pml4Entry = (UINT64 *)((*Pml5Entry) & ~mAddressEncMask & gPhyMask);
+  } else {
+    Pml4Entry = (UINT64 *)PageTable;
+  }
 
   //
-  // Set sub-entries number
+  // Set IA32_PG_PMNT bit to mask first 4 Pml3Entry entry
   //
-  SetSubEntriesNum (Pml4Entry, 3);
-  PTEntry = Pml4Entry;
-
-  if (m5LevelPagingNeeded) {
-    //
-    // Fill PML5 entry
-    //
-    Pml5Entry = (UINT64 *)AllocatePageTableMemory (1);
-    ASSERT (Pml5Entry != NULL);
-    *Pml5Entry = (UINTN)Pml4Entry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
-    ZeroMem (Pml5Entry + 1, EFI_PAGE_SIZE - sizeof (*Pml5Entry));
-    //
-    // Set sub-entries number
-    //
-    SetSubEntriesNum (Pml5Entry, 1);
-    PTEntry = Pml5Entry;
+  Pml3Entry = (UINT64 *)((*Pml4Entry) & ~mAddressEncMask & gPhyMask);
+  for (Index = 0; Index < 4; Index++) {
+    Pml3Entry[Index] |= IA32_PG_PMNT;
   }
 
-  if (mCpuSmmRestrictedMemoryAccess) {
+  if (!mCpuSmmRestrictedMemoryAccess) {
     //
-    // When access to non-SMRAM memory is restricted, create page table
-    // that covers all memory space.
+    // Set Pml4Entry sub-entries number
     //
-    SetStaticPageTable ((UINTN)PTEntry, mPhysicalAddressBits);
-  } else {
+    SetSubEntriesNum (Pml4Entry, 3);
+
     //
     // Add pages to page pool
     //
@@ -466,7 +294,7 @@ SmmInitPageTable (
   //
   // Return the address of PML4/PML5 (to set CR3)
   //
-  return (UINT32)(UINTN)PTEntry;
+  return (UINT32)PageTable;
 }
 
 /**
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
index cb7a691745..0805b2e780 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
@@ -35,26 +35,11 @@ InitSmmS3Cr3 (
   VOID
   )
 {
-  EFI_PHYSICAL_ADDRESS  Pages;
-  UINT64                *PTEntry;
-
-  //
-  // Generate PAE page table for the first 4GB memory space
-  //
-  Pages = Gen4GPageTable (FALSE);
-
-  //
-  // Fill Page-Table-Level4 (PML4) entry
-  //
-  PTEntry = (UINT64 *)AllocatePageTableMemory (1);
-  ASSERT (PTEntry != NULL);
-  *PTEntry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
-  ZeroMem (PTEntry + 1, EFI_PAGE_SIZE - sizeof (*PTEntry));
-
   //
+  // Generate level4 page table for the first 4GB memory space
   // Return the address of PML4 (to set CR3)
   //
-  mSmmS3ResumeState->SmmS3Cr3 = (UINT32)(UINTN)PTEntry;
+  mSmmS3ResumeState->SmmS3Cr3 = (UINT32)GenSmmPageTable (Paging4Level, 32);
 
   return;
 }
-- 
2.39.1.windows.1


Thread overview: 10+ messages
2023-04-12  8:53 [Patch V2 0/8] Use CpuPageTableLib to create and update smm page table duntan
2023-04-12  8:53 ` [Patch V2 1/8] OvmfPkg: Add CpuPageTableLib required by PiSmmCpuDxe duntan
2023-04-13  6:42   ` Gerd Hoffmann
2023-04-12  8:53 ` [Patch V2 2/8] UefiPayloadPkg: " duntan
2023-04-12  8:53 ` [Patch V2 3/8] UefiCpuPkg: Use CpuPageTableLib to convert SMM paging attribute duntan
2023-04-12  8:53 ` [Patch V2 4/8] UefiCpuPkg/PiSmmCpuDxeSmm: Avoid setting non-present range to RO/NX duntan
2023-04-12  8:53 ` [Patch V2 5/8] UefiCpuPkg: Extern mSmmShadowStackSize in PiSmmCpuDxeSmm.h duntan
2023-04-12  8:53 ` duntan [this message]
2023-04-12  8:53 ` [Patch V2 7/8] UefiCpuPkg: Refinement to code about updating smm page table duntan
2023-04-12  8:53 ` [Patch V2 8/8] UefiCpuPkg/PiSmmCpuDxeSmm: Remove unnecessary function duntan
