* [Patch V4 00/15] Use CpuPageTableLib to create and update smm page table
@ 2023-05-16 9:59 duntan
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel
In the V4 patch set:
1. Add a new patch 'MdeModulePkg: Remove RO and NX protection when unset guard page'.
2. Add a new patch 'UefiCpuPkg/PiSmmCpuDxeSmm: Clear CR0.WP before modify page table', since the assumption that the SMM page table is always read-write should be dropped.
3. Add DEBUG_CODE in 'UefiCpuPkg: Use CpuPageTableLib to convert SMM paging attribute.' to indicate that when mapping a range as present and neither EFI_MEMORY_RO nor EFI_MEMORY_XP is specified, any existing present sub-range inside the input range is set to ReadOnly with NX disabled.
4. Sort mSmmCpuSmramRanges and mProtectionMemRange in separate patches.
5. Separate the patches for creating the SMM runtime page table and the SMM S3 page table.
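Point 2 exists because the SMM page table itself may be mapped read-only, so supervisor writes to it fault unless CR0.WP (bit 16) is cleared first and restored afterwards. The helpers below are a minimal, testable sketch of that bit manipulation on a plain value; actual firmware would read and write the register through BaseLib's AsmReadCr0()/AsmWriteCr0().

```c
#include <stdint.h>

/* CR0.WP is bit 16: when set, supervisor-mode writes to read-only
   pages fault. Clear it before modifying a (possibly read-only)
   SMM page table, then restore it. Sketch only: these functions
   operate on a plain value, not on the real CR0 register. */
#define CR0_WP  (1ULL << 16)

uint64_t clear_wp (uint64_t cr0) { return cr0 & ~CR0_WP; }
uint64_t set_wp   (uint64_t cr0) { return cr0 |  CR0_WP; }
```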
Dun Tan (15):
OvmfPkg: Add CpuPageTableLib required by PiSmmCpuDxe
UefiPayloadPkg: Add CpuPageTableLib required by PiSmmCpuDxe
OvmfPkg:Remove code that apply AddressEncMask to non-leaf entry
MdeModulePkg: Remove RO and NX protection when unset guard page
UefiCpuPkg: Use CpuPageTableLib to convert SMM paging attribute.
UefiCpuPkg/PiSmmCpuDxeSmm: Avoid setting non-present range to RO/NX
UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP
UefiCpuPkg/PiSmmCpuDxeSmm: Clear CR0.WP before modify page table
UefiCpuPkg: Extern mSmmShadowStackSize in PiSmmCpuDxeSmm.h
UefiCpuPkg: Add GenSmmPageTable() to create smm page table
UefiCpuPkg: Use GenSmmPageTable() to create Smm S3 page table
UefiCpuPkg: Sort mSmmCpuSmramRanges in FindSmramInfo
UefiCpuPkg: Sort mProtectionMemRange when ReadyToLock
UefiCpuPkg: Refinement to smm runtime InitPaging() code
UefiCpuPkg/PiSmmCpuDxeSmm: Remove unnecessary function
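Patches 12 and 13 in the list above sort mSmmCpuSmramRanges and mProtectionMemRange by base address so later range lookups can rely on ordering. A minimal sketch of such a sort, using a hypothetical simplified range type (the real code operates on EFI_SMRAM_DESCRIPTOR entries):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-in for EFI_SMRAM_DESCRIPTOR: only the start
   address matters for ordering. */
typedef struct {
  uint64_t CpuStart;
  uint64_t PhysicalSize;
} RANGE;

/* Sort ranges ascending by CpuStart. Insertion sort is adequate
   because the range count is small in practice. */
void sort_ranges (RANGE *r, size_t count) {
  for (size_t i = 1; i < count; i++) {
    RANGE key = r[i];
    size_t j  = i;
    while (j > 0 && r[j - 1].CpuStart > key.CpuStart) {
      r[j] = r[j - 1];
      j--;
    }
    r[j] = key;
  }
}
```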
MdeModulePkg/Core/PiSmmCore/HeapGuard.c | 2 +-
OvmfPkg/CloudHv/CloudHvX64.dsc | 2 +-
OvmfPkg/Library/BaseMemEncryptSevLib/X64/PeiDxeVirtualMemory.c | 6 +++---
OvmfPkg/OvmfPkgIa32.dsc | 3 ++-
OvmfPkg/OvmfPkgIa32X64.dsc | 2 +-
OvmfPkg/OvmfPkgX64.dsc | 2 +-
UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c | 5 +++--
UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c | 3 +--
UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmProfileArch.c | 2 +-
UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 132 ------------------------------------------------------------------------------------------------------------------------------------
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c | 43 +++++++++++++++++++++++++++++++++++++++++--
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 121 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-------------------------------
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 1 +
UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 801 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c | 327 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 231 +++++++++++++++++++++++++++++++++------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c | 3 +--
UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c | 19 ++-----------------
UefiPayloadPkg/UefiPayloadPkg.dsc | 2 +-
19 files changed, 683 insertions(+), 1024 deletions(-)
--
2.31.1.windows.1
* [Patch V4 01/15] OvmfPkg: Add CpuPageTableLib required by PiSmmCpuDxe
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel; +Cc: Ard Biesheuvel, Jiewen Yao, Jordan Justen, Gerd Hoffmann, Ray Ni
Add the CpuPageTableLib instance required by PiSmmCpuDxe to the
corresponding DSC files of OvmfPkg.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Ray Ni <ray.ni@intel.com>
---
OvmfPkg/CloudHv/CloudHvX64.dsc | 2 +-
OvmfPkg/OvmfPkgIa32.dsc | 3 ++-
OvmfPkg/OvmfPkgIa32X64.dsc | 2 +-
OvmfPkg/OvmfPkgX64.dsc | 2 +-
4 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/OvmfPkg/CloudHv/CloudHvX64.dsc b/OvmfPkg/CloudHv/CloudHvX64.dsc
index 2a1139daaa..fd7c884d01 100644
--- a/OvmfPkg/CloudHv/CloudHvX64.dsc
+++ b/OvmfPkg/CloudHv/CloudHvX64.dsc
@@ -181,6 +181,7 @@
MemEncryptSevLib|OvmfPkg/Library/BaseMemEncryptSevLib/DxeMemEncryptSevLib.inf
PeiHardwareInfoLib|OvmfPkg/Library/HardwareInfoLib/PeiHardwareInfoLib.inf
DxeHardwareInfoLib|OvmfPkg/Library/HardwareInfoLib/DxeHardwareInfoLib.inf
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
!if $(SMM_REQUIRE) == FALSE
LockBoxLib|OvmfPkg/Library/LockBoxLib/LockBoxBaseLib.inf
!endif
@@ -390,7 +391,6 @@
DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
!endif
PciLib|OvmfPkg/Library/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf
- CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
diff --git a/OvmfPkg/OvmfPkgIa32.dsc b/OvmfPkg/OvmfPkgIa32.dsc
index e333b8b418..c07de40449 100644
--- a/OvmfPkg/OvmfPkgIa32.dsc
+++ b/OvmfPkg/OvmfPkgIa32.dsc
@@ -1,7 +1,7 @@
## @file
# EFI/Framework Open Virtual Machine Firmware (OVMF) platform
#
-# Copyright (c) 2006 - 2022, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
# (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
# Copyright (c) Microsoft Corporation.
#
@@ -186,6 +186,7 @@
MemEncryptTdxLib|OvmfPkg/Library/BaseMemEncryptTdxLib/BaseMemEncryptTdxLibNull.inf
PeiHardwareInfoLib|OvmfPkg/Library/HardwareInfoLib/PeiHardwareInfoLib.inf
DxeHardwareInfoLib|OvmfPkg/Library/HardwareInfoLib/DxeHardwareInfoLib.inf
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
!if $(SMM_REQUIRE) == FALSE
LockBoxLib|OvmfPkg/Library/LockBoxLib/LockBoxBaseLib.inf
!endif
diff --git a/OvmfPkg/OvmfPkgIa32X64.dsc b/OvmfPkg/OvmfPkgIa32X64.dsc
index 25974230a2..481982f438 100644
--- a/OvmfPkg/OvmfPkgIa32X64.dsc
+++ b/OvmfPkg/OvmfPkgIa32X64.dsc
@@ -190,6 +190,7 @@
MemEncryptTdxLib|OvmfPkg/Library/BaseMemEncryptTdxLib/BaseMemEncryptTdxLibNull.inf
PeiHardwareInfoLib|OvmfPkg/Library/HardwareInfoLib/PeiHardwareInfoLib.inf
DxeHardwareInfoLib|OvmfPkg/Library/HardwareInfoLib/DxeHardwareInfoLib.inf
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
!if $(SMM_REQUIRE) == FALSE
LockBoxLib|OvmfPkg/Library/LockBoxLib/LockBoxBaseLib.inf
!endif
@@ -402,7 +403,6 @@
DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
!endif
PciLib|OvmfPkg/Library/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf
- CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
diff --git a/OvmfPkg/OvmfPkgX64.dsc b/OvmfPkg/OvmfPkgX64.dsc
index c1762ffca4..67162ce510 100644
--- a/OvmfPkg/OvmfPkgX64.dsc
+++ b/OvmfPkg/OvmfPkgX64.dsc
@@ -203,6 +203,7 @@
MemEncryptTdxLib|OvmfPkg/Library/BaseMemEncryptTdxLib/BaseMemEncryptTdxLib.inf
PeiHardwareInfoLib|OvmfPkg/Library/HardwareInfoLib/PeiHardwareInfoLib.inf
DxeHardwareInfoLib|OvmfPkg/Library/HardwareInfoLib/DxeHardwareInfoLib.inf
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
!if $(SMM_REQUIRE) == FALSE
LockBoxLib|OvmfPkg/Library/LockBoxLib/LockBoxBaseLib.inf
@@ -423,7 +424,6 @@
DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
!endif
PciLib|OvmfPkg/Library/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf
- CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
--
2.31.1.windows.1
* [Patch V4 02/15] UefiPayloadPkg: Add CpuPageTableLib required by PiSmmCpuDxe
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel; +Cc: Guo Dong, Ray Ni, Sean Rhodes, James Lu, Gua Guo
Add the CpuPageTableLib instance required by PiSmmCpuDxeSmm to
UefiPayloadPkg.dsc.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
---
UefiPayloadPkg/UefiPayloadPkg.dsc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/UefiPayloadPkg/UefiPayloadPkg.dsc b/UefiPayloadPkg/UefiPayloadPkg.dsc
index 998d222909..66ec528ee6 100644
--- a/UefiPayloadPkg/UefiPayloadPkg.dsc
+++ b/UefiPayloadPkg/UefiPayloadPkg.dsc
@@ -206,6 +206,7 @@
OpensslLib|CryptoPkg/Library/OpensslLib/OpensslLib.inf
RngLib|MdePkg/Library/BaseRngLib/BaseRngLib.inf
HobLib|UefiPayloadPkg/Library/DxeHobLib/DxeHobLib.inf
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
#
# UEFI & PI
@@ -345,7 +346,6 @@
DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
!endif
CpuExceptionHandlerLib|UefiCpuPkg/Library/CpuExceptionHandlerLib/DxeCpuExceptionHandlerLib.inf
- CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
!if $(PERFORMANCE_MEASUREMENT_ENABLE)
PerformanceLib|MdeModulePkg/Library/DxePerformanceLib/DxePerformanceLib.inf
--
2.31.1.windows.1
* [Patch V4 03/15] OvmfPkg:Remove code that apply AddressEncMask to non-leaf entry
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel
Cc: Ard Biesheuvel, Jiewen Yao, Jordan Justen, Gerd Hoffmann,
Tom Lendacky, Ray Ni
Remove the code that applies AddressEncMask to non-leaf entries when
MemEncryptSevLib splits SMM page table pages. The FvbServicesSmm
driver calls MemEncryptSevClearMmioPageEncMask to clear the
AddressEncMask bit in the page table for a specific range. In the
AMD SEV feature, this AddressEncMask bit in a page table entry
indicates whether the memory is guest private memory or shared
memory. However, all memory used by the page table itself is treated
as encrypted regardless of the encryption bit, so removing the
EncMask bit from SMM non-leaf page table entries doesn't impact the
AMD SEV feature.
If a page split happens while the AddressEncMask bit is being
cleared, new non-leaf entries with AddressEncMask applied would
otherwise appear in the SMM page table. At ReadyToLock, code in the
PiSmmCpuDxe module uses CpuPageTableLib to modify the SMM page
table, so remove the code that applies AddressEncMask to new
non-leaf entries, since CpuPageTableLib doesn't consume the EncMask
PCD.
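A minimal sketch of the resulting non-leaf entry construction, matching the change in the patch below. The encryption-mask position is hypothetical: the real SEV C-bit location is reported by CPUID and varies by CPU; bit 47 is used here only for illustration.

```c
#include <stdint.h>

#define ENC_MASK  (1ULL << 47)   /* hypothetical C-bit position */
#define PAGE_P    (1ULL << 0)    /* present */
#define PAGE_RW   (1ULL << 1)    /* read/write */

/* A non-leaf entry now carries only the next-level table address
   plus P/RW; the encryption mask is no longer ORed in. */
uint64_t make_nonleaf_entry (uint64_t next_table_pa) {
  return next_table_pa | PAGE_P | PAGE_RW;
}

/* Strip the encryption mask from an existing entry. */
uint64_t clear_enc_mask (uint64_t entry) {
  return entry & ~ENC_MASK;
}
```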
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Ray Ni <ray.ni@intel.com>
---
OvmfPkg/Library/BaseMemEncryptSevLib/X64/PeiDxeVirtualMemory.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/OvmfPkg/Library/BaseMemEncryptSevLib/X64/PeiDxeVirtualMemory.c b/OvmfPkg/Library/BaseMemEncryptSevLib/X64/PeiDxeVirtualMemory.c
index a1f6e61c1e..f2b821f6d9 100644
--- a/OvmfPkg/Library/BaseMemEncryptSevLib/X64/PeiDxeVirtualMemory.c
+++ b/OvmfPkg/Library/BaseMemEncryptSevLib/X64/PeiDxeVirtualMemory.c
@@ -233,7 +233,7 @@ Split2MPageTo4K (
// Fill in 2M page entry.
//
*PageEntry2M = ((UINT64)(UINTN)PageTableEntry1 |
- IA32_PG_P | IA32_PG_RW | AddressEncMask);
+ IA32_PG_P | IA32_PG_RW);
}
/**
@@ -352,7 +352,7 @@ SetPageTablePoolReadOnly (
PhysicalAddress += LevelSize[Level - 1];
}
- PageTable[Index] = (UINT64)(UINTN)NewPageTable | AddressEncMask |
+ PageTable[Index] = (UINT64)(UINTN)NewPageTable |
IA32_PG_P | IA32_PG_RW;
PageTable = NewPageTable;
}
@@ -440,7 +440,7 @@ Split1GPageTo2M (
// Fill in 1G page entry.
//
*PageEntry1G = ((UINT64)(UINTN)PageDirectoryEntry |
- IA32_PG_P | IA32_PG_RW | AddressEncMask);
+ IA32_PG_P | IA32_PG_RW);
PhysicalAddress2M = PhysicalAddress;
for (IndexOfPageDirectoryEntries = 0;
--
2.31.1.windows.1
* [Patch V4 04/15] MdeModulePkg: Remove RO and NX protection when unset guard page
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel; +Cc: Liming Gao, Ray Ni, Jian J Wang
Remove RO and NX protection when unsetting a guard page.
In UnsetGuardPage(), remove all memory attribute protection for the
guarded page, not just the not-present (RP) attribute.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
---
MdeModulePkg/Core/PiSmmCore/HeapGuard.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
index 8f3bab6fee..7daeeccf13 100644
--- a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
+++ b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
@@ -553,7 +553,7 @@ UnsetGuardPage (
mSmmMemoryAttribute,
BaseAddress,
EFI_PAGE_SIZE,
- EFI_MEMORY_RP
+ EFI_MEMORY_RP|EFI_MEMORY_RO|EFI_MEMORY_XP
);
ASSERT_EFI_ERROR (Status);
mOnGuarding = FALSE;
--
2.31.1.windows.1
* [Patch V4 05/15] UefiCpuPkg: Use CpuPageTableLib to convert SMM paging attribute.
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel; +Cc: Eric Dong, Ray Ni, Rahul Kumar, Gerd Hoffmann
Simplify the ConvertMemoryPageAttributes() API to convert paging
attributes via CpuPageTableLib. The new implementation calls
PageTableMap() to update the page attributes of a memory range.
With the PageTableMap() API in CpuPageTableLib, the complicated
page table manipulation code can be removed.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c | 3 ++-
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 28 +++++++++++++---------------
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 1 +
UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 443 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 9 +++++++--
5 files changed, 156 insertions(+), 328 deletions(-)
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
index 34bf6e1a25..9c8107080a 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
@@ -1,7 +1,7 @@
/** @file
Page table manipulation functions for IA-32 processors
-Copyright (c) 2009 - 2019, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2009 - 2023, Intel Corporation. All rights reserved.<BR>
Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
SPDX-License-Identifier: BSD-2-Clause-Patent
@@ -31,6 +31,7 @@ SmmInitPageTable (
InitializeSpinLock (mPFLock);
mPhysicalAddressBits = 32;
+ mPagingMode = PagingPae;
if (FeaturePcdGet (PcdCpuSmmProfileEnable) ||
HEAP_GUARD_NONSTOP_MODE ||
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
index a5c2bdd971..ba341cadc6 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
@@ -50,6 +50,7 @@ SPDX-License-Identifier: BSD-2-Clause-Patent
#include <Library/SmmCpuFeaturesLib.h>
#include <Library/PeCoffGetEntryPointLib.h>
#include <Library/RegisterCpuFeaturesLib.h>
+#include <Library/CpuPageTableLib.h>
#include <AcpiCpuData.h>
#include <CpuHotPlugData.h>
@@ -260,6 +261,7 @@ extern UINTN mNumberOfCpus;
extern EFI_SMM_CPU_PROTOCOL mSmmCpu;
extern EFI_MM_MP_PROTOCOL mSmmMp;
extern BOOLEAN m5LevelPagingNeeded;
+extern PAGING_MODE mPagingMode;
///
/// The mode of the CPU at the time an SMI occurs
@@ -1008,11 +1010,10 @@ SetPageTableAttributes (
Length from their current attributes to the attributes specified by Attributes.
@param[in] PageTableBase The page table base.
- @param[in] EnablePML5Paging If PML5 paging is enabled.
+ @param[in] PagingMode The paging mode.
@param[in] BaseAddress The physical address that is the start address of a memory region.
@param[in] Length The size in bytes of the memory region.
@param[in] Attributes The bit mask of attributes to set for the memory region.
- @param[out] IsSplitted TRUE means page table splitted. FALSE means page table not splitted.
@retval EFI_SUCCESS The attributes were set for the memory region.
@retval EFI_ACCESS_DENIED The attributes for the memory resource range specified by
@@ -1030,12 +1031,11 @@ SetPageTableAttributes (
**/
EFI_STATUS
SmmSetMemoryAttributesEx (
- IN UINTN PageTableBase,
- IN BOOLEAN EnablePML5Paging,
- IN EFI_PHYSICAL_ADDRESS BaseAddress,
- IN UINT64 Length,
- IN UINT64 Attributes,
- OUT BOOLEAN *IsSplitted OPTIONAL
+ IN UINTN PageTableBase,
+ IN PAGING_MODE PagingMode,
+ IN PHYSICAL_ADDRESS BaseAddress,
+ IN UINT64 Length,
+ IN UINT64 Attributes
);
/**
@@ -1043,34 +1043,32 @@ SmmSetMemoryAttributesEx (
Length from their current attributes to the attributes specified by Attributes.
@param[in] PageTableBase The page table base.
- @param[in] EnablePML5Paging If PML5 paging is enabled.
+ @param[in] PagingMode The paging mode.
@param[in] BaseAddress The physical address that is the start address of a memory region.
@param[in] Length The size in bytes of the memory region.
@param[in] Attributes The bit mask of attributes to clear for the memory region.
- @param[out] IsSplitted TRUE means page table splitted. FALSE means page table not splitted.
@retval EFI_SUCCESS The attributes were cleared for the memory region.
@retval EFI_ACCESS_DENIED The attributes for the memory resource range specified by
BaseAddress and Length cannot be modified.
@retval EFI_INVALID_PARAMETER Length is zero.
Attributes specified an illegal combination of attributes that
- cannot be set together.
+ cannot be cleared together.
@retval EFI_OUT_OF_RESOURCES There are not enough system resources to modify the attributes of
the memory resource range.
@retval EFI_UNSUPPORTED The processor does not support one or more bytes of the memory
resource range specified by BaseAddress and Length.
- The bit mask of attributes is not support for the memory resource
+ The bit mask of attributes is not supported for the memory resource
range specified by BaseAddress and Length.
**/
EFI_STATUS
SmmClearMemoryAttributesEx (
IN UINTN PageTableBase,
- IN BOOLEAN EnablePML5Paging,
+ IN PAGING_MODE PagingMode,
IN EFI_PHYSICAL_ADDRESS BaseAddress,
IN UINT64 Length,
- IN UINT64 Attributes,
- OUT BOOLEAN *IsSplitted OPTIONAL
+ IN UINT64 Attributes
);
/**
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
index 158e05e264..38d4e950a4 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
@@ -97,6 +97,7 @@
ReportStatusCodeLib
SmmCpuFeaturesLib
PeCoffGetEntryPointLib
+ CpuPageTableLib
[Protocols]
gEfiSmmAccess2ProtocolGuid ## CONSUMES
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
index 834a756061..73ad9fb017 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
@@ -1,6 +1,6 @@
/** @file
-Copyright (c) 2016 - 2019, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2016 - 2023, Intel Corporation. All rights reserved.<BR>
SPDX-License-Identifier: BSD-2-Clause-Patent
**/
@@ -26,14 +26,9 @@ UINTN mGcdMemNumberOfDesc = 0;
EFI_MEMORY_ATTRIBUTES_TABLE *mUefiMemoryAttributesTable = NULL;
-PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] = {
- { Page4K, SIZE_4KB, PAGING_4K_ADDRESS_MASK_64 },
- { Page2M, SIZE_2MB, PAGING_2M_ADDRESS_MASK_64 },
- { Page1G, SIZE_1GB, PAGING_1G_ADDRESS_MASK_64 },
-};
-
-BOOLEAN mIsShadowStack = FALSE;
-BOOLEAN m5LevelPagingNeeded = FALSE;
+BOOLEAN mIsShadowStack = FALSE;
+BOOLEAN m5LevelPagingNeeded = FALSE;
+PAGING_MODE mPagingMode = PagingModeMax;
//
// Global variable to keep track current available memory used as page table.
@@ -185,52 +180,6 @@ AllocatePageTableMemory (
return Buffer;
}
-/**
- Return length according to page attributes.
-
- @param[in] PageAttributes The page attribute of the page entry.
-
- @return The length of page entry.
-**/
-UINTN
-PageAttributeToLength (
- IN PAGE_ATTRIBUTE PageAttribute
- )
-{
- UINTN Index;
-
- for (Index = 0; Index < sizeof (mPageAttributeTable)/sizeof (mPageAttributeTable[0]); Index++) {
- if (PageAttribute == mPageAttributeTable[Index].Attribute) {
- return (UINTN)mPageAttributeTable[Index].Length;
- }
- }
-
- return 0;
-}
-
-/**
- Return address mask according to page attributes.
-
- @param[in] PageAttributes The page attribute of the page entry.
-
- @return The address mask of page entry.
-**/
-UINTN
-PageAttributeToMask (
- IN PAGE_ATTRIBUTE PageAttribute
- )
-{
- UINTN Index;
-
- for (Index = 0; Index < sizeof (mPageAttributeTable)/sizeof (mPageAttributeTable[0]); Index++) {
- if (PageAttribute == mPageAttributeTable[Index].Attribute) {
- return (UINTN)mPageAttributeTable[Index].AddressMask;
- }
- }
-
- return 0;
-}
-
/**
Return page table entry to match the address.
@@ -353,181 +302,6 @@ GetAttributesFromPageEntry (
return Attributes;
}
-/**
- Modify memory attributes of page entry.
-
- @param[in] PageEntry The page entry.
- @param[in] Attributes The bit mask of attributes to modify for the memory region.
- @param[in] IsSet TRUE means to set attributes. FALSE means to clear attributes.
- @param[out] IsModified TRUE means page table modified. FALSE means page table not modified.
-**/
-VOID
-ConvertPageEntryAttribute (
- IN UINT64 *PageEntry,
- IN UINT64 Attributes,
- IN BOOLEAN IsSet,
- OUT BOOLEAN *IsModified
- )
-{
- UINT64 CurrentPageEntry;
- UINT64 NewPageEntry;
-
- CurrentPageEntry = *PageEntry;
- NewPageEntry = CurrentPageEntry;
- if ((Attributes & EFI_MEMORY_RP) != 0) {
- if (IsSet) {
- NewPageEntry &= ~(UINT64)IA32_PG_P;
- } else {
- NewPageEntry |= IA32_PG_P;
- }
- }
-
- if ((Attributes & EFI_MEMORY_RO) != 0) {
- if (IsSet) {
- NewPageEntry &= ~(UINT64)IA32_PG_RW;
- if (mIsShadowStack) {
- // Environment setup
- // ReadOnly page need set Dirty bit for shadow stack
- NewPageEntry |= IA32_PG_D;
- // Clear user bit for supervisor shadow stack
- NewPageEntry &= ~(UINT64)IA32_PG_U;
- } else {
- // Runtime update
- // Clear dirty bit for non shadow stack, to protect RO page.
- NewPageEntry &= ~(UINT64)IA32_PG_D;
- }
- } else {
- NewPageEntry |= IA32_PG_RW;
- }
- }
-
- if ((Attributes & EFI_MEMORY_XP) != 0) {
- if (mXdSupported) {
- if (IsSet) {
- NewPageEntry |= IA32_PG_NX;
- } else {
- NewPageEntry &= ~IA32_PG_NX;
- }
- }
- }
-
- *PageEntry = NewPageEntry;
- if (CurrentPageEntry != NewPageEntry) {
- *IsModified = TRUE;
- DEBUG ((DEBUG_VERBOSE, "ConvertPageEntryAttribute 0x%lx", CurrentPageEntry));
- DEBUG ((DEBUG_VERBOSE, "->0x%lx\n", NewPageEntry));
- } else {
- *IsModified = FALSE;
- }
-}
-
-/**
- This function returns if there is need to split page entry.
-
- @param[in] BaseAddress The base address to be checked.
- @param[in] Length The length to be checked.
- @param[in] PageEntry The page entry to be checked.
- @param[in] PageAttribute The page attribute of the page entry.
-
- @retval SplitAttributes on if there is need to split page entry.
-**/
-PAGE_ATTRIBUTE
-NeedSplitPage (
- IN PHYSICAL_ADDRESS BaseAddress,
- IN UINT64 Length,
- IN UINT64 *PageEntry,
- IN PAGE_ATTRIBUTE PageAttribute
- )
-{
- UINT64 PageEntryLength;
-
- PageEntryLength = PageAttributeToLength (PageAttribute);
-
- if (((BaseAddress & (PageEntryLength - 1)) == 0) && (Length >= PageEntryLength)) {
- return PageNone;
- }
-
- if (((BaseAddress & PAGING_2M_MASK) != 0) || (Length < SIZE_2MB)) {
- return Page4K;
- }
-
- return Page2M;
-}
-
-/**
- This function splits one page entry to small page entries.
-
- @param[in] PageEntry The page entry to be splitted.
- @param[in] PageAttribute The page attribute of the page entry.
- @param[in] SplitAttribute How to split the page entry.
-
- @retval RETURN_SUCCESS The page entry is splitted.
- @retval RETURN_UNSUPPORTED The page entry does not support to be splitted.
- @retval RETURN_OUT_OF_RESOURCES No resource to split page entry.
-**/
-RETURN_STATUS
-SplitPage (
- IN UINT64 *PageEntry,
- IN PAGE_ATTRIBUTE PageAttribute,
- IN PAGE_ATTRIBUTE SplitAttribute
- )
-{
- UINT64 BaseAddress;
- UINT64 *NewPageEntry;
- UINTN Index;
-
- ASSERT (PageAttribute == Page2M || PageAttribute == Page1G);
-
- if (PageAttribute == Page2M) {
- //
- // Split 2M to 4K
- //
- ASSERT (SplitAttribute == Page4K);
- if (SplitAttribute == Page4K) {
- NewPageEntry = AllocatePageTableMemory (1);
- DEBUG ((DEBUG_VERBOSE, "Split - 0x%x\n", NewPageEntry));
- if (NewPageEntry == NULL) {
- return RETURN_OUT_OF_RESOURCES;
- }
-
- BaseAddress = *PageEntry & PAGING_2M_ADDRESS_MASK_64;
- for (Index = 0; Index < SIZE_4KB / sizeof (UINT64); Index++) {
- NewPageEntry[Index] = (BaseAddress + SIZE_4KB * Index) | mAddressEncMask | ((*PageEntry) & PAGE_PROGATE_BITS);
- }
-
- (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
- return RETURN_SUCCESS;
- } else {
- return RETURN_UNSUPPORTED;
- }
- } else if (PageAttribute == Page1G) {
- //
- // Split 1G to 2M
- // No need support 1G->4K directly, we should use 1G->2M, then 2M->4K to get more compact page table.
- //
- ASSERT (SplitAttribute == Page2M || SplitAttribute == Page4K);
- if (((SplitAttribute == Page2M) || (SplitAttribute == Page4K))) {
- NewPageEntry = AllocatePageTableMemory (1);
- DEBUG ((DEBUG_VERBOSE, "Split - 0x%x\n", NewPageEntry));
- if (NewPageEntry == NULL) {
- return RETURN_OUT_OF_RESOURCES;
- }
-
- BaseAddress = *PageEntry & PAGING_1G_ADDRESS_MASK_64;
- for (Index = 0; Index < SIZE_4KB / sizeof (UINT64); Index++) {
- NewPageEntry[Index] = (BaseAddress + SIZE_2MB * Index) | mAddressEncMask | IA32_PG_PS | ((*PageEntry) & PAGE_PROGATE_BITS);
- }
-
- (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
- return RETURN_SUCCESS;
- } else {
- return RETURN_UNSUPPORTED;
- }
- } else {
- return RETURN_UNSUPPORTED;
- }
-}
-
/**
This function modifies the page attributes for the memory region specified by BaseAddress and
Length from their current attributes to the attributes specified by Attributes.
@@ -535,12 +309,11 @@ SplitPage (
Caller should make sure BaseAddress and Length is at page boundary.
@param[in] PageTableBase The page table base.
- @param[in] EnablePML5Paging If PML5 paging is enabled.
+ @param[in] PagingMode The paging mode.
@param[in] BaseAddress The physical address that is the start address of a memory region.
@param[in] Length The size in bytes of the memory region.
@param[in] Attributes The bit mask of attributes to modify for the memory region.
@param[in] IsSet TRUE means to set attributes. FALSE means to clear attributes.
- @param[out] IsSplitted TRUE means page table splitted. FALSE means page table not splitted.
@param[out] IsModified TRUE means page table modified. FALSE means page table not modified.
@retval RETURN_SUCCESS The attributes were modified for the memory region.
@@ -559,28 +332,30 @@ SplitPage (
RETURN_STATUS
ConvertMemoryPageAttributes (
IN UINTN PageTableBase,
- IN BOOLEAN EnablePML5Paging,
+ IN PAGING_MODE PagingMode,
IN PHYSICAL_ADDRESS BaseAddress,
IN UINT64 Length,
IN UINT64 Attributes,
IN BOOLEAN IsSet,
- OUT BOOLEAN *IsSplitted OPTIONAL,
OUT BOOLEAN *IsModified OPTIONAL
)
{
- UINT64 *PageEntry;
- PAGE_ATTRIBUTE PageAttribute;
- UINTN PageEntryLength;
- PAGE_ATTRIBUTE SplitAttribute;
RETURN_STATUS Status;
- BOOLEAN IsEntryModified;
+ IA32_MAP_ATTRIBUTE PagingAttribute;
+ IA32_MAP_ATTRIBUTE PagingAttrMask;
+ UINTN PageTableBufferSize;
+ VOID *PageTableBuffer;
EFI_PHYSICAL_ADDRESS MaximumSupportMemAddress;
+ IA32_MAP_ENTRY *Map;
+ UINTN Count;
+ UINTN Index;
ASSERT (Attributes != 0);
ASSERT ((Attributes & ~EFI_MEMORY_ATTRIBUTE_MASK) == 0);
ASSERT ((BaseAddress & (SIZE_4KB - 1)) == 0);
ASSERT ((Length & (SIZE_4KB - 1)) == 0);
+ ASSERT (PageTableBase != 0);
if (Length == 0) {
return RETURN_INVALID_PARAMETER;
@@ -599,61 +374,121 @@ ConvertMemoryPageAttributes (
return RETURN_UNSUPPORTED;
}
- // DEBUG ((DEBUG_ERROR, "ConvertMemoryPageAttributes(%x) - %016lx, %016lx, %02lx\n", IsSet, BaseAddress, Length, Attributes));
+ PagingAttribute.Uint64 = 0;
+ PagingAttribute.Uint64 = mAddressEncMask | BaseAddress;
+ PagingAttrMask.Uint64 = 0;
- if (IsSplitted != NULL) {
- *IsSplitted = FALSE;
- }
-
- if (IsModified != NULL) {
- *IsModified = FALSE;
+ if ((Attributes & EFI_MEMORY_RO) != 0) {
+ PagingAttrMask.Bits.ReadWrite = 1;
+ if (IsSet) {
+ PagingAttribute.Bits.ReadWrite = 0;
+ PagingAttrMask.Bits.Dirty = 1;
+ if (mIsShadowStack) {
+ // Environment setup
+ // ReadOnly page need set Dirty bit for shadow stack
+ PagingAttribute.Bits.Dirty = 1;
+ // Clear user bit for supervisor shadow stack
+ PagingAttribute.Bits.UserSupervisor = 0;
+ PagingAttrMask.Bits.UserSupervisor = 1;
+ } else {
+ // Runtime update
+ // Clear dirty bit for non shadow stack, to protect RO page.
+ PagingAttribute.Bits.Dirty = 0;
+ }
+ } else {
+ PagingAttribute.Bits.ReadWrite = 1;
+ }
}
- //
- // Below logic is to check 2M/4K page to make sure we do not waste memory.
- //
- while (Length != 0) {
- PageEntry = GetPageTableEntry (PageTableBase, EnablePML5Paging, BaseAddress, &PageAttribute);
- if (PageEntry == NULL) {
- return RETURN_UNSUPPORTED;
+ if ((Attributes & EFI_MEMORY_XP) != 0) {
+ if (mXdSupported) {
+ PagingAttribute.Bits.Nx = IsSet ? 1 : 0;
+ PagingAttrMask.Bits.Nx = 1;
}
+ }
- PageEntryLength = PageAttributeToLength (PageAttribute);
- SplitAttribute = NeedSplitPage (BaseAddress, Length, PageEntry, PageAttribute);
- if (SplitAttribute == PageNone) {
- ConvertPageEntryAttribute (PageEntry, Attributes, IsSet, &IsEntryModified);
- if (IsEntryModified) {
- if (IsModified != NULL) {
- *IsModified = TRUE;
- }
- }
-
+ if ((Attributes & EFI_MEMORY_RP) != 0) {
+ if (IsSet) {
+ PagingAttribute.Bits.Present = 0;
//
- // Convert success, move to next
+ // When map a range to non-present, all attributes except Present should not be provided.
//
- BaseAddress += PageEntryLength;
- Length -= PageEntryLength;
+ PagingAttrMask.Uint64 = 0;
+ PagingAttrMask.Bits.Present = 1;
} else {
- Status = SplitPage (PageEntry, PageAttribute, SplitAttribute);
- if (RETURN_ERROR (Status)) {
- return RETURN_UNSUPPORTED;
- }
+ //
+ // When map range to present range, provide all attributes.
+ //
+ PagingAttribute.Bits.Present = 1;
+ PagingAttrMask.Uint64 = MAX_UINT64;
- if (IsSplitted != NULL) {
- *IsSplitted = TRUE;
- }
+ //
+ // By default memory is Ring 3 accessble.
+ //
+ PagingAttribute.Bits.UserSupervisor = 1;
+
+ DEBUG_CODE (
+ if (((Attributes & EFI_MEMORY_RO) == 0) || (((Attributes & EFI_MEMORY_XP) == 0) && (mXdSupported))) {
+ //
+ // When mapping a range to present and EFI_MEMORY_RO or EFI_MEMORY_XP is not specificed,
+ // check if [BaseAddress, BaseAddress + Length] contains present range.
+ // Existing Present range in [BaseAddress, BaseAddress + Length] is set to NX disable and ReadOnly.
+ //
+ Count = 0;
+ Map = NULL;
+ Status = PageTableParse (PageTableBase, mPagingMode, NULL, &Count);
+ while (Status == RETURN_BUFFER_TOO_SMALL) {
+ if (Map != NULL) {
+ FreePool (Map);
+ }
- if (IsModified != NULL) {
- *IsModified = TRUE;
+ Map = AllocatePool (Count * sizeof (IA32_MAP_ENTRY));
+ ASSERT (Map != NULL);
+ Status = PageTableParse (PageTableBase, mPagingMode, Map, &Count);
+ }
+
+ ASSERT_RETURN_ERROR (Status);
+
+ for (Index = 0; Index < Count; Index++) {
+ if ((BaseAddress < Map[Index].LinearAddress +
+ Map[Index].Length) && (BaseAddress + Length > Map[Index].LinearAddress))
+ {
+ DEBUG ((DEBUG_ERROR, "SMM ConvertMemoryPageAttributes: Existing Present range in [0x%lx, 0x%lx] is set to NX disable and ReadOnly\n", BaseAddress, BaseAddress + Length));
+ break;
+ }
+ }
+
+ FreePool (Map);
}
- //
- // Just split current page
- // Convert success in next around
- //
+ );
}
}
+ if (PagingAttrMask.Uint64 == 0) {
+ return RETURN_SUCCESS;
+ }
+
+ PageTableBufferSize = 0;
+ Status = PageTableMap (&PageTableBase, PagingMode, NULL, &PageTableBufferSize, BaseAddress, Length, &PagingAttribute, &PagingAttrMask, IsModified);
+
+ if (Status == RETURN_BUFFER_TOO_SMALL) {
+ PageTableBuffer = AllocatePageTableMemory (EFI_SIZE_TO_PAGES (PageTableBufferSize));
+ ASSERT (PageTableBuffer != NULL);
+ Status = PageTableMap (&PageTableBase, PagingMode, PageTableBuffer, &PageTableBufferSize, BaseAddress, Length, &PagingAttribute, &PagingAttrMask, IsModified);
+ }
+
+ if (Status == RETURN_INVALID_PARAMETER) {
+ //
+ // The only reason that PageTableMap returns RETURN_INVALID_PARAMETER here is to modify other attributes
+ // of a non-present range but remains the non-present range still as non-present.
+ //
+ DEBUG ((DEBUG_ERROR, "SMM ConvertMemoryPageAttributes: Non-present range in [0x%lx, 0x%lx] needs to be removed\n", BaseAddress, BaseAddress + Length));
+ }
+
+ ASSERT_RETURN_ERROR (Status);
+ ASSERT (PageTableBufferSize == 0);
+
return RETURN_SUCCESS;
}
@@ -697,11 +532,10 @@ FlushTlbForAll (
Length from their current attributes to the attributes specified by Attributes.
@param[in] PageTableBase The page table base.
- @param[in] EnablePML5Paging If PML5 paging is enabled.
+ @param[in] PagingMode The paging mode.
@param[in] BaseAddress The physical address that is the start address of a memory region.
@param[in] Length The size in bytes of the memory region.
@param[in] Attributes The bit mask of attributes to set for the memory region.
- @param[out] IsSplitted TRUE means page table splitted. FALSE means page table not splitted.
@retval EFI_SUCCESS The attributes were set for the memory region.
@retval EFI_ACCESS_DENIED The attributes for the memory resource range specified by
@@ -720,17 +554,16 @@ FlushTlbForAll (
EFI_STATUS
SmmSetMemoryAttributesEx (
IN UINTN PageTableBase,
- IN BOOLEAN EnablePML5Paging,
+ IN PAGING_MODE PagingMode,
IN EFI_PHYSICAL_ADDRESS BaseAddress,
IN UINT64 Length,
- IN UINT64 Attributes,
- OUT BOOLEAN *IsSplitted OPTIONAL
+ IN UINT64 Attributes
)
{
EFI_STATUS Status;
BOOLEAN IsModified;
- Status = ConvertMemoryPageAttributes (PageTableBase, EnablePML5Paging, BaseAddress, Length, Attributes, TRUE, IsSplitted, &IsModified);
+ Status = ConvertMemoryPageAttributes (PageTableBase, PagingMode, BaseAddress, Length, Attributes, TRUE, &IsModified);
if (!EFI_ERROR (Status)) {
if (IsModified) {
//
@@ -748,11 +581,10 @@ SmmSetMemoryAttributesEx (
Length from their current attributes to the attributes specified by Attributes.
@param[in] PageTableBase The page table base.
- @param[in] EnablePML5Paging If PML5 paging is enabled.
+ @param[in] PagingMode The paging mode.
@param[in] BaseAddress The physical address that is the start address of a memory region.
@param[in] Length The size in bytes of the memory region.
@param[in] Attributes The bit mask of attributes to clear for the memory region.
- @param[out] IsSplitted TRUE means page table splitted. FALSE means page table not splitted.
@retval EFI_SUCCESS The attributes were cleared for the memory region.
@retval EFI_ACCESS_DENIED The attributes for the memory resource range specified by
@@ -771,17 +603,16 @@ SmmSetMemoryAttributesEx (
EFI_STATUS
SmmClearMemoryAttributesEx (
IN UINTN PageTableBase,
- IN BOOLEAN EnablePML5Paging,
+ IN PAGING_MODE PagingMode,
IN EFI_PHYSICAL_ADDRESS BaseAddress,
IN UINT64 Length,
- IN UINT64 Attributes,
- OUT BOOLEAN *IsSplitted OPTIONAL
+ IN UINT64 Attributes
)
{
EFI_STATUS Status;
BOOLEAN IsModified;
- Status = ConvertMemoryPageAttributes (PageTableBase, EnablePML5Paging, BaseAddress, Length, Attributes, FALSE, IsSplitted, &IsModified);
+ Status = ConvertMemoryPageAttributes (PageTableBase, PagingMode, BaseAddress, Length, Attributes, FALSE, &IsModified);
if (!EFI_ERROR (Status)) {
if (IsModified) {
//
@@ -823,14 +654,10 @@ SmmSetMemoryAttributes (
IN UINT64 Attributes
)
{
- IA32_CR4 Cr4;
- UINTN PageTableBase;
- BOOLEAN Enable5LevelPaging;
-
- PageTableBase = AsmReadCr3 () & PAGING_4K_ADDRESS_MASK_64;
- Cr4.UintN = AsmReadCr4 ();
- Enable5LevelPaging = (BOOLEAN)(Cr4.Bits.LA57 == 1);
- return SmmSetMemoryAttributesEx (PageTableBase, Enable5LevelPaging, BaseAddress, Length, Attributes, NULL);
+ UINTN PageTableBase;
+
+ PageTableBase = AsmReadCr3 () & PAGING_4K_ADDRESS_MASK_64;
+ return SmmSetMemoryAttributesEx (PageTableBase, mPagingMode, BaseAddress, Length, Attributes);
}
/**
@@ -862,14 +689,10 @@ SmmClearMemoryAttributes (
IN UINT64 Attributes
)
{
- IA32_CR4 Cr4;
- UINTN PageTableBase;
- BOOLEAN Enable5LevelPaging;
-
- PageTableBase = AsmReadCr3 () & PAGING_4K_ADDRESS_MASK_64;
- Cr4.UintN = AsmReadCr4 ();
- Enable5LevelPaging = (BOOLEAN)(Cr4.Bits.LA57 == 1);
- return SmmClearMemoryAttributesEx (PageTableBase, Enable5LevelPaging, BaseAddress, Length, Attributes, NULL);
+ UINTN PageTableBase;
+
+ PageTableBase = AsmReadCr3 () & PAGING_4K_ADDRESS_MASK_64;
+ return SmmClearMemoryAttributesEx (PageTableBase, mPagingMode, BaseAddress, Length, Attributes);
}
/**
@@ -891,7 +714,7 @@ SetShadowStack (
EFI_STATUS Status;
mIsShadowStack = TRUE;
- Status = SmmSetMemoryAttributesEx (Cr3, m5LevelPagingNeeded, BaseAddress, Length, EFI_MEMORY_RO, NULL);
+ Status = SmmSetMemoryAttributesEx (Cr3, mPagingMode, BaseAddress, Length, EFI_MEMORY_RO);
mIsShadowStack = FALSE;
return Status;
@@ -915,7 +738,7 @@ SetNotPresentPage (
{
EFI_STATUS Status;
- Status = SmmSetMemoryAttributesEx (Cr3, m5LevelPagingNeeded, BaseAddress, Length, EFI_MEMORY_RP, NULL);
+ Status = SmmSetMemoryAttributesEx (Cr3, mPagingMode, BaseAddress, Length, EFI_MEMORY_RP);
return Status;
}
@@ -1799,7 +1622,7 @@ EnablePageTableProtection (
//
// Set entire pool including header, used-memory and left free-memory as ReadOnly in SMM page table.
//
- ConvertMemoryPageAttributes (PageTableBase, m5LevelPagingNeeded, Address, PoolSize, EFI_MEMORY_RO, TRUE, NULL, NULL);
+ ConvertMemoryPageAttributes (PageTableBase, mPagingMode, Address, PoolSize, EFI_MEMORY_RO, TRUE, NULL);
Pool = Pool->NextPool;
} while (Pool != HeadPool);
}
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
index 3deb1ffd67..a25a96f68c 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
@@ -1,7 +1,7 @@
/** @file
Page Fault (#PF) handler for X64 processors
-Copyright (c) 2009 - 2022, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2009 - 2023, Intel Corporation. All rights reserved.<BR>
Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
SPDX-License-Identifier: BSD-2-Clause-Patent
@@ -353,7 +353,12 @@ SmmInitPageTable (
m1GPageTableSupport = Is1GPageSupport ();
m5LevelPagingNeeded = Is5LevelPagingNeeded ();
mPhysicalAddressBits = CalculateMaximumSupportAddress ();
- PatchInstructionX86 (gPatch5LevelPagingNeeded, m5LevelPagingNeeded, 1);
+ if (m5LevelPagingNeeded) {
+ mPagingMode = m1GPageTableSupport ? Paging5Level1GB : Paging5Level;
+ PatchInstructionX86 (gPatch5LevelPagingNeeded, TRUE, 1);
+ } else {
+ mPagingMode = m1GPageTableSupport ? Paging4Level1GB : Paging4Level;
+ }
DEBUG ((DEBUG_INFO, "5LevelPaging Needed - %d\n", m5LevelPagingNeeded));
DEBUG ((DEBUG_INFO, "1GPageTable Support - %d\n", m1GPageTableSupport));
DEBUG ((DEBUG_INFO, "PcdCpuSmmRestrictedMemoryAccess - %d\n", mCpuSmmRestrictedMemoryAccess));
--
2.31.1.windows.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
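[Editorial note] The DEBUG_CODE scan in ConvertMemoryPageAttributes above decides whether [BaseAddress, BaseAddress + Length) intersects an existing present map entry. The half-open-interval overlap test it uses can be modeled and unit-tested in isolation; the sketch below is a host-side model, not the EDK2 code itself, and the function name is illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

// Half-open interval overlap test, as used in the DEBUG_CODE scan:
// [BaseAddress, BaseAddress + Length) overlaps [MapStart, MapStart + MapLen)
// exactly when each range starts below the other range's end.
static bool RangesOverlap(uint64_t BaseAddress, uint64_t Length,
                          uint64_t MapStart, uint64_t MapLen)
{
  return (BaseAddress < MapStart + MapLen) && (BaseAddress + Length > MapStart);
}
```

Note that merely adjacent ranges (one ending exactly where the other begins) do not count as overlapping, which is why the comparison is strict on both sides.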
* [Patch V4 06/15] UefiCpuPkg/PiSmmCpuDxeSmm: Avoid setting non-present range to RO/NX
2023-05-16 9:59 [Patch V4 00/15] Use CpuPageTableLib to create and update smm page table duntan
` (4 preceding siblings ...)
2023-05-16 9:59 ` [Patch V4 05/15] UefiCpuPkg: Use CpuPageTableLib to convert SMM paging attribute duntan
@ 2023-05-16 9:59 ` duntan
2023-05-16 9:59 ` [Patch V4 07/15] UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP duntan
` (8 subsequent siblings)
14 siblings, 0 replies; 44+ messages in thread
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel; +Cc: Eric Dong, Ray Ni, Rahul Kumar, Gerd Hoffmann
In PiSmmCpuDxeSmm code, SetMemMapAttributes() marks memory ranges
in SmmMemoryAttributesTable as RO/NX. Non-present ranges may exist
inside these memory ranges, and CpuPageTableLib does not permit
setting other attributes on a non-present range. Add code to handle
this case: only map the present ranges in SmmMemoryAttributesTable
as RO or NX.
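[Editorial note] The splitting logic this patch introduces (SetMemMapWithNonPresentRange walking the sorted IA32_MAP_ENTRY array and applying attributes only to present intersections) can be modeled on the host. The sketch below is a simplified, testable model under the assumption that Map[] is sorted and non-overlapping, as PageTableParse produces; it collects the present sub-ranges rather than calling SmmSetMemoryAttributes, and all names are illustrative.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { uint64_t Start; uint64_t Length; } RANGE;

// Collect the sub-ranges of [MapStart, MapLimit) that intersect the sorted,
// non-overlapping present ranges in Map[]. Gaps between Map entries are
// treated as non-present and skipped, mirroring how the patch avoids setting
// RO/NX on non-present ranges. Returns the number of present sub-ranges.
static size_t CollectPresentSubRanges(uint64_t MapStart, uint64_t MapLimit,
                                      const RANGE *Map, size_t Count,
                                      RANGE *Out, size_t OutMax)
{
  size_t n = 0;
  for (size_t i = 0; i < Count && MapStart < MapLimit; i++) {
    uint64_t PresStart = Map[i].Start;
    uint64_t PresLimit = Map[i].Start + Map[i].Length;
    if (PresLimit <= MapStart) {
      continue;                 // present range entirely below the target
    }
    if (PresStart >= MapLimit) {
      break;                    // Map is sorted: nothing further can overlap
    }
    uint64_t s = MapStart > PresStart ? MapStart : PresStart;
    uint64_t e = MapLimit < PresLimit ? MapLimit : PresLimit;
    if (n < OutMax) {
      Out[n].Start  = s;
      Out[n].Length = e - s;
    }
    n++;
    MapStart = e;               // resume after the present intersection
  }
  return n;
}
```

In the real driver, each collected sub-range would be passed to SmmSetMemoryAttributes with EFI_MEMORY_RO or EFI_MEMORY_XP.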
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 147 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++----------------------
1 file changed, 125 insertions(+), 22 deletions(-)
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
index 73ad9fb017..2faee8f859 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
@@ -902,6 +902,89 @@ PatchGdtIdtMap (
);
}
+/**
+ This function remove the non-present range in [MemMapStart, MemMapLimit] and
+ set [MemMapStart, NonPresentRangeStart] as MemoryAttribute in page table.
+
+ @param MemMapStart Pointer to the start address of range.
+ @param MemMapLimit Limit address of range.
+ @param NonPresentRangeStart Start address of non-present range.
+ @param NonPresentRangeLimit Limit address of non-present range.
+ @param MemoryAttribute The bit mask of attributes to modify for the memory region.
+
+**/
+VOID
+RemoveNonPresentRange (
+ UINT64 *MemMapStart,
+ UINT64 MemMapLimit,
+ UINT64 NonPresentRangeStart,
+ UINT64 NonPresentRangeLimit,
+ UINT64 MemoryAttribute
+ )
+{
+ if (*MemMapStart < NonPresentRangeStart) {
+ SmmSetMemoryAttributes (
+ *MemMapStart,
+ NonPresentRangeStart - *MemMapStart,
+ MemoryAttribute
+ );
+ }
+
+ *MemMapStart = NonPresentRangeLimit;
+}
+
+/**
+ This function set [MemMapStart, MemMapLimit] to the input MemoryAttribute.
+
+ @param MemMapStart Start address of range.
+ @param MemMapLimit Limit address of range.
+ @param Map Pointer to the array of Cr3 IA32_MAP_ENTRY.
+ @param Count Count of IA32_MAP_ENTRY in Map.
+ @param MemoryAttribute The bit mask of attributes to modify for the memory region.
+
+**/
+VOID
+SetMemMapWithNonPresentRange (
+ UINT64 MemMapStart,
+ UINT64 MemMapLimit,
+ IA32_MAP_ENTRY *Map,
+ UINTN Count,
+ UINT64 MemoryAttribute
+ )
+{
+ UINTN Index;
+ UINT64 NonPresentRangeStart;
+
+ NonPresentRangeStart = 0;
+
+ for (Index = 0; Index < Count; Index++) {
+ if ((Map[Index].LinearAddress > NonPresentRangeStart) &&
+ (MemMapStart < Map[Index].LinearAddress) && (MemMapLimit > NonPresentRangeStart))
+ {
+ //
+ // [NonPresentRangeStart, Map[Index].LinearAddress] is non-present.
+ //
+ RemoveNonPresentRange (&MemMapStart, MemMapLimit, NonPresentRangeStart, Map[Index].LinearAddress, MemoryAttribute);
+ }
+
+ NonPresentRangeStart = Map[Index].LinearAddress + Map[Index].Length;
+ if (NonPresentRangeStart >= MemMapLimit) {
+ break;
+ }
+ }
+
+ //
+ // There is no non-present in current [MemMapStart, MemMapLimit] anymore.
+ //
+ if (MemMapStart < MemMapLimit) {
+ SmmSetMemoryAttributes (
+ MemMapStart,
+ MemMapLimit - MemMapStart,
+ MemoryAttribute
+ );
+ }
+}
+
/**
This function sets memory attribute according to MemoryAttributesTable.
**/
@@ -916,6 +999,11 @@ SetMemMapAttributes (
UINTN DescriptorSize;
UINTN Index;
EDKII_PI_SMM_MEMORY_ATTRIBUTES_TABLE *MemoryAttributesTable;
+ UINTN PageTable;
+ EFI_STATUS Status;
+ IA32_MAP_ENTRY *Map;
+ UINTN Count;
+ UINT64 MemoryAttribute;
SmmGetSystemConfigurationTable (&gEdkiiPiSmmMemoryAttributesTableGuid, (VOID **)&MemoryAttributesTable);
if (MemoryAttributesTable == NULL) {
@@ -942,36 +1030,51 @@ SetMemMapAttributes (
MemoryMap = NEXT_MEMORY_DESCRIPTOR (MemoryMap, DescriptorSize);
}
+ Count = 0;
+ Map = NULL;
+ PageTable = AsmReadCr3 ();
+ Status = PageTableParse (PageTable, mPagingMode, NULL, &Count);
+ while (Status == RETURN_BUFFER_TOO_SMALL) {
+ if (Map != NULL) {
+ FreePool (Map);
+ }
+
+ Map = AllocatePool (Count * sizeof (IA32_MAP_ENTRY));
+ ASSERT (Map != NULL);
+ Status = PageTableParse (PageTable, mPagingMode, Map, &Count);
+ }
+
+ ASSERT_RETURN_ERROR (Status);
+
MemoryMap = MemoryMapStart;
for (Index = 0; Index < MemoryMapEntryCount; Index++) {
DEBUG ((DEBUG_VERBOSE, "SetAttribute: Memory Entry - 0x%lx, 0x%x\n", MemoryMap->PhysicalStart, MemoryMap->NumberOfPages));
- switch (MemoryMap->Type) {
- case EfiRuntimeServicesCode:
- SmmSetMemoryAttributes (
- MemoryMap->PhysicalStart,
- EFI_PAGES_TO_SIZE ((UINTN)MemoryMap->NumberOfPages),
- EFI_MEMORY_RO
- );
- break;
- case EfiRuntimeServicesData:
- SmmSetMemoryAttributes (
- MemoryMap->PhysicalStart,
- EFI_PAGES_TO_SIZE ((UINTN)MemoryMap->NumberOfPages),
- EFI_MEMORY_XP
- );
- break;
- default:
- SmmSetMemoryAttributes (
- MemoryMap->PhysicalStart,
- EFI_PAGES_TO_SIZE ((UINTN)MemoryMap->NumberOfPages),
- EFI_MEMORY_XP
- );
- break;
+ if (MemoryMap->Type == EfiRuntimeServicesCode) {
+ MemoryAttribute = EFI_MEMORY_RO;
+ } else {
+ //
+ // Set other type memory as NX.
+ //
+ MemoryAttribute = EFI_MEMORY_XP;
}
+ //
+ // There may exist non-present range overlaps with the MemoryMap range.
+ // Do not change other attributes of non-present range while still remaining it as non-present
+ //
+ SetMemMapWithNonPresentRange (
+ MemoryMap->PhysicalStart,
+ MemoryMap->PhysicalStart + EFI_PAGES_TO_SIZE ((UINTN)MemoryMap->NumberOfPages),
+ Map,
+ Count,
+ MemoryAttribute
+ );
+
MemoryMap = NEXT_MEMORY_DESCRIPTOR (MemoryMap, DescriptorSize);
}
+ FreePool (Map);
+
PatchSmmSaveStateMap ();
PatchGdtIdtMap ();
--
2.31.1.windows.1
* [Patch V4 07/15] UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP
2023-05-16 9:59 [Patch V4 00/15] Use CpuPageTableLib to create and update smm page table duntan
` (5 preceding siblings ...)
2023-05-16 9:59 ` [Patch V4 06/15] UefiCpuPkg/PiSmmCpuDxeSmm: Avoid setting non-present range to RO/NX duntan
@ 2023-05-16 9:59 ` duntan
2023-05-20 2:00 ` [edk2-devel] " Kun Qin
2023-06-02 3:09 ` Ni, Ray
2023-05-16 9:59 ` [Patch V4 08/15] UefiCpuPkg/PiSmmCpuDxeSmm: Clear CR0.WP before modify page table duntan
` (7 subsequent siblings)
14 siblings, 2 replies; 44+ messages in thread
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel; +Cc: Eric Dong, Ray Ni, Rahul Kumar, Gerd Hoffmann
Add two functions to disable/enable CR0.WP. These two functions
will also be used in later commits. This commit doesn't change any
functionality.
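[Editorial note] The key invariant of this pair is the save/disable/restore protocol: CET must be turned off before CR0.WP is cleared, and each is restored only if it was set on entry. The sketch below is a host-testable model of that protocol; gCr0Wp and gCet are stand-ins for the real CR0.WP bit and CET state, since AsmReadCr0()/DisableCet() cannot run outside firmware.

```c
#include <assert.h>
#include <stdbool.h>

// Stand-ins for the hardware CR0.WP bit and CET enable state.
static bool gCr0Wp = true;
static bool gCet   = true;

// Model of DisableReadOnlyPageWriteProtect(): record the current state,
// then clear WP. CET must be disabled first, because CET requires CR0.WP.
static void DisableWp(bool *WpEnabled, bool *CetEnabled)
{
  *WpEnabled  = gCr0Wp;
  *CetEnabled = gCet;
  if (gCr0Wp) {
    if (gCet) {
      gCet = false;   // disable CET before clearing CR0.WP
    }
    gCr0Wp = false;
  }
}

// Model of EnableReadOnlyPageWriteProtect(): restore only what was on.
static void EnableWp(bool WpEnabled, bool CetEnabled)
{
  if (WpEnabled) {
    gCr0Wp = true;
    if (CetEnabled) {
      gCet = true;    // re-enable CET only if it was enabled on entry
    }
  }
}
```

A caller brackets page table updates with DisableWp()/EnableWp(), exactly as InitializePageTablePool does in the diff below.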
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 24 ++++++++++++++++++++++++
UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-------------------------------------------------
2 files changed, 90 insertions(+), 49 deletions(-)
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
index ba341cadc6..e0c4ca76dc 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
@@ -1565,4 +1565,28 @@ SmmWaitForApArrival (
VOID
);
+/**
+ Disable Write Protect on pages marked as read-only if Cr0.Bits.WP is 1.
+
+ @param[out] WpEnabled If Cr0.WP is enabled.
+ @param[out] CetEnabled If CET is enabled.
+**/
+VOID
+DisableReadOnlyPageWriteProtect (
+ OUT BOOLEAN *WpEnabled,
+ OUT BOOLEAN *CetEnabled
+ );
+
+/**
+ Enable Write Protect on pages marked as read-only.
+
+ @param[out] WpEnabled If Cr0.WP should be enabled.
+ @param[out] CetEnabled If CET should be enabled.
+**/
+VOID
+EnableReadOnlyPageWriteProtect (
+ BOOLEAN WpEnabled,
+ BOOLEAN CetEnabled
+ );
+
#endif
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
index 2faee8f859..4b512edf68 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
@@ -40,6 +40,64 @@ PAGE_TABLE_POOL *mPageTablePool = NULL;
//
BOOLEAN mIsReadOnlyPageTable = FALSE;
+/**
+ Disable Write Protect on pages marked as read-only if Cr0.Bits.WP is 1.
+
+ @param[out] WpEnabled If Cr0.WP is enabled.
+ @param[out] CetEnabled If CET is enabled.
+**/
+VOID
+DisableReadOnlyPageWriteProtect (
+ OUT BOOLEAN *WpEnabled,
+ OUT BOOLEAN *CetEnabled
+ )
+{
+ IA32_CR0 Cr0;
+
+ *CetEnabled = ((AsmReadCr4 () & CR4_CET_ENABLE) != 0) ? TRUE : FALSE;
+ Cr0.UintN = AsmReadCr0 ();
+ *WpEnabled = (Cr0.Bits.WP != 0) ? TRUE : FALSE;
+ if (*WpEnabled) {
+ if (*CetEnabled) {
+ //
+ // CET must be disabled if WP is disabled. Disable CET before clearing CR0.WP.
+ //
+ DisableCet ();
+ }
+
+ Cr0.Bits.WP = 0;
+ AsmWriteCr0 (Cr0.UintN);
+ }
+}
+
+/**
+ Enable Write Protect on pages marked as read-only.
+
+ @param[out] WpEnabled If Cr0.WP should be enabled.
+ @param[out] CetEnabled If CET should be enabled.
+**/
+VOID
+EnableReadOnlyPageWriteProtect (
+ BOOLEAN WpEnabled,
+ BOOLEAN CetEnabled
+ )
+{
+ IA32_CR0 Cr0;
+
+ if (WpEnabled) {
+ Cr0.UintN = AsmReadCr0 ();
+ Cr0.Bits.WP = 1;
+ AsmWriteCr0 (Cr0.UintN);
+
+ if (CetEnabled) {
+ //
+ // re-enable CET.
+ //
+ EnableCet ();
+ }
+ }
+}
+
/**
Initialize a buffer pool for page table use only.
@@ -62,10 +120,9 @@ InitializePageTablePool (
IN UINTN PoolPages
)
{
- VOID *Buffer;
- BOOLEAN CetEnabled;
- BOOLEAN WpEnabled;
- IA32_CR0 Cr0;
+ VOID *Buffer;
+ BOOLEAN WpEnabled;
+ BOOLEAN CetEnabled;
//
// Always reserve at least PAGE_TABLE_POOL_UNIT_PAGES, including one page for
@@ -102,34 +159,9 @@ InitializePageTablePool (
// If page table memory has been marked as RO, mark the new pool pages as read-only.
//
if (mIsReadOnlyPageTable) {
- CetEnabled = ((AsmReadCr4 () & CR4_CET_ENABLE) != 0) ? TRUE : FALSE;
- Cr0.UintN = AsmReadCr0 ();
- WpEnabled = (Cr0.Bits.WP != 0) ? TRUE : FALSE;
- if (WpEnabled) {
- if (CetEnabled) {
- //
- // CET must be disabled if WP is disabled. Disable CET before clearing CR0.WP.
- //
- DisableCet ();
- }
-
- Cr0.Bits.WP = 0;
- AsmWriteCr0 (Cr0.UintN);
- }
-
+ DisableReadOnlyPageWriteProtect (&WpEnabled, &CetEnabled);
SmmSetMemoryAttributes ((EFI_PHYSICAL_ADDRESS)(UINTN)Buffer, EFI_PAGES_TO_SIZE (PoolPages), EFI_MEMORY_RO);
- if (WpEnabled) {
- Cr0.UintN = AsmReadCr0 ();
- Cr0.Bits.WP = 1;
- AsmWriteCr0 (Cr0.UintN);
-
- if (CetEnabled) {
- //
- // re-enable CET.
- //
- EnableCet ();
- }
- }
+ EnableReadOnlyPageWriteProtect (WpEnabled, CetEnabled);
}
return TRUE;
@@ -1782,6 +1814,7 @@ SetPageTableAttributes (
VOID
)
{
+ BOOLEAN WpEnabled;
BOOLEAN CetEnabled;
if (!IfReadOnlyPageTableNeeded ()) {
@@ -1794,15 +1827,7 @@ SetPageTableAttributes (
// Disable write protection, because we need mark page table to be write protected.
// We need *write* page table memory, to mark itself to be *read only*.
//
- CetEnabled = ((AsmReadCr4 () & CR4_CET_ENABLE) != 0) ? TRUE : FALSE;
- if (CetEnabled) {
- //
- // CET must be disabled if WP is disabled.
- //
- DisableCet ();
- }
-
- AsmWriteCr0 (AsmReadCr0 () & ~CR0_WP);
+ DisableReadOnlyPageWriteProtect (&WpEnabled, &CetEnabled);
// Set memory used by page table as Read Only.
DEBUG ((DEBUG_INFO, "Start...\n"));
@@ -1811,20 +1836,12 @@ SetPageTableAttributes (
//
// Enable write protection, after page table attribute updated.
//
- AsmWriteCr0 (AsmReadCr0 () | CR0_WP);
+ EnableReadOnlyPageWriteProtect (TRUE, CetEnabled);
mIsReadOnlyPageTable = TRUE;
//
// Flush TLB after mark all page table pool as read only.
//
FlushTlbForAll ();
-
- if (CetEnabled) {
- //
- // re-enable CET.
- //
- EnableCet ();
- }
-
return;
}
--
2.31.1.windows.1
* [Patch V4 08/15] UefiCpuPkg/PiSmmCpuDxeSmm: Clear CR0.WP before modify page table
2023-05-16 9:59 [Patch V4 00/15] Use CpuPageTableLib to create and update smm page table duntan
` (6 preceding siblings ...)
2023-05-16 9:59 ` [Patch V4 07/15] UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP duntan
@ 2023-05-16 9:59 ` duntan
2023-06-02 3:12 ` [edk2-devel] " Ni, Ray
2023-05-16 9:59 ` [Patch V4 09/15] UefiCpuPkg: Extern mSmmShadowStackSize in PiSmmCpuDxeSmm.h duntan
` (6 subsequent siblings)
14 siblings, 1 reply; 44+ messages in thread
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel; +Cc: Eric Dong, Ray Ni, Rahul Kumar, Gerd Hoffmann
Clear CR0.WP before modifying the SMM page table. Currently, the
code assumes that the SMM page table is always RW before
ReadyToLock. However, when AMD SEV is enabled, the FvbServicesSmm
driver calls MemEncryptSevClearMmioPageEncMask to clear the
AddressEncMask bit in the SMM page table for this range:
[PcdOvmfFdBaseAddress,PcdOvmfFdBaseAddress+PcdOvmfFirmwareFdSize]
If a page split happens in this process, new memory for the SMM
page table is allocated. The newly allocated page table memory is
then marked as RO in the SMM page table by the FvbServicesSmm
driver, which may lead to a page fault if SMM code doesn't clear
CR0.WP before modifying the SMM page table at ReadyToLock.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 11 +++++++++++
UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c | 5 +++++
2 files changed, 16 insertions(+)
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
index 4b512edf68..ef0ba9a355 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
@@ -1036,6 +1036,8 @@ SetMemMapAttributes (
IA32_MAP_ENTRY *Map;
UINTN Count;
UINT64 MemoryAttribute;
+ BOOLEAN WpEnabled;
+ BOOLEAN CetEnabled;
SmmGetSystemConfigurationTable (&gEdkiiPiSmmMemoryAttributesTableGuid, (VOID **)&MemoryAttributesTable);
if (MemoryAttributesTable == NULL) {
@@ -1078,6 +1080,8 @@ SetMemMapAttributes (
ASSERT_RETURN_ERROR (Status);
+ DisableReadOnlyPageWriteProtect (&WpEnabled, &CetEnabled);
+
MemoryMap = MemoryMapStart;
for (Index = 0; Index < MemoryMapEntryCount; Index++) {
DEBUG ((DEBUG_VERBOSE, "SetAttribute: Memory Entry - 0x%lx, 0x%x\n", MemoryMap->PhysicalStart, MemoryMap->NumberOfPages));
@@ -1105,6 +1109,7 @@ SetMemMapAttributes (
MemoryMap = NEXT_MEMORY_DESCRIPTOR (MemoryMap, DescriptorSize);
}
+ EnableReadOnlyPageWriteProtect (WpEnabled, CetEnabled);
FreePool (Map);
PatchSmmSaveStateMap ();
@@ -1411,9 +1416,13 @@ SetUefiMemMapAttributes (
UINTN MemoryMapEntryCount;
UINTN Index;
EFI_MEMORY_DESCRIPTOR *Entry;
+ BOOLEAN WpEnabled;
+ BOOLEAN CetEnabled;
DEBUG ((DEBUG_INFO, "SetUefiMemMapAttributes\n"));
+ DisableReadOnlyPageWriteProtect (&WpEnabled, &CetEnabled);
+
if (mUefiMemoryMap != NULL) {
MemoryMapEntryCount = mUefiMemoryMapSize/mUefiDescriptorSize;
MemoryMap = mUefiMemoryMap;
@@ -1492,6 +1501,8 @@ SetUefiMemMapAttributes (
}
}
+ EnableReadOnlyPageWriteProtect (WpEnabled, CetEnabled);
+
//
// Do not free mUefiMemoryAttributesTable, it will be checked in IsSmmCommBufferForbiddenAddress().
//
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
index 1b0b6673e1..5625ba0cac 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
@@ -574,6 +574,8 @@ InitPaging (
BOOLEAN Nx;
IA32_CR4 Cr4;
BOOLEAN Enable5LevelPaging;
+ BOOLEAN WpEnabled;
+ BOOLEAN CetEnabled;
Cr4.UintN = AsmReadCr4 ();
Enable5LevelPaging = (BOOLEAN)(Cr4.Bits.LA57 == 1);
@@ -620,6 +622,7 @@ InitPaging (
NumberOfPdptEntries = 4;
}
+ DisableReadOnlyPageWriteProtect (&WpEnabled, &CetEnabled);
//
// Go through page table and change 2MB-page into 4KB-page.
//
@@ -800,6 +803,8 @@ InitPaging (
} // end for PML4
} // end for PML5
+ EnableReadOnlyPageWriteProtect (WpEnabled, CetEnabled);
+
//
// Flush TLB
//
--
2.31.1.windows.1
* [Patch V4 09/15] UefiCpuPkg: Extern mSmmShadowStackSize in PiSmmCpuDxeSmm.h
2023-05-16 9:59 [Patch V4 00/15] Use CpuPageTableLib to create and update smm page table duntan
` (7 preceding siblings ...)
2023-05-16 9:59 ` [Patch V4 08/15] UefiCpuPkg/PiSmmCpuDxeSmm: Clear CR0.WP before modify page table duntan
@ 2023-05-16 9:59 ` duntan
2023-06-02 3:16 ` [edk2-devel] " Ni, Ray
2023-05-16 9:59 ` [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable() to create smm page table duntan
` (5 subsequent siblings)
14 siblings, 1 reply; 44+ messages in thread
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel; +Cc: Eric Dong, Ray Ni, Rahul Kumar, Gerd Hoffmann
Declare mSmmShadowStackSize as extern in PiSmmCpuDxeSmm.h and
remove the extern declarations of mSmmShadowStackSize from the
C files to simplify the code.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c | 3 +--
UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 2 --
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 1 +
UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 2 --
UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c | 3 +--
5 files changed, 3 insertions(+), 8 deletions(-)
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c
index 6c48a53f67..636dc8d92f 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c
@@ -1,7 +1,7 @@
/** @file
SMM CPU misc functions for Ia32 arch specific.
-Copyright (c) 2015 - 2019, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2015 - 2023, Intel Corporation. All rights reserved.<BR>
SPDX-License-Identifier: BSD-2-Clause-Patent
**/
@@ -14,7 +14,6 @@ EFI_PHYSICAL_ADDRESS mGdtBuffer;
UINTN mGdtBufferSize;
extern BOOLEAN mCetSupported;
-extern UINTN mSmmShadowStackSize;
X86_ASSEMBLY_PATCH_LABEL mPatchCetPl0Ssp;
X86_ASSEMBLY_PATCH_LABEL mPatchCetInterruptSsp;
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
index baf827cf9d..1878252eac 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
@@ -29,8 +29,6 @@ MM_COMPLETION mSmmStartupThisApToken;
//
UINT32 *mPackageFirstThreadIndex = NULL;
-extern UINTN mSmmShadowStackSize;
-
/**
Performs an atomic compare exchange operation to get semaphore.
The compare exchange operation must be performed using
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
index e0c4ca76dc..a7da9673a5 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
@@ -262,6 +262,7 @@ extern EFI_SMM_CPU_PROTOCOL mSmmCpu;
extern EFI_MM_MP_PROTOCOL mSmmMp;
extern BOOLEAN m5LevelPagingNeeded;
extern PAGING_MODE mPagingMode;
+extern UINTN mSmmShadowStackSize;
///
/// The mode of the CPU at the time an SMI occurs
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
index a25a96f68c..25ced50955 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
@@ -13,8 +13,6 @@ SPDX-License-Identifier: BSD-2-Clause-Patent
#define PAGE_TABLE_PAGES 8
#define ACC_MAX_BIT BIT3
-extern UINTN mSmmShadowStackSize;
-
LIST_ENTRY mPagePool = INITIALIZE_LIST_HEAD_VARIABLE (mPagePool);
BOOLEAN m1GPageTableSupport = FALSE;
BOOLEAN mCpuSmmRestrictedMemoryAccess;
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c
index 00a284c369..c4f21e2155 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c
@@ -1,7 +1,7 @@
/** @file
SMM CPU misc functions for x64 arch specific.
-Copyright (c) 2015 - 2019, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2015 - 2023, Intel Corporation. All rights reserved.<BR>
SPDX-License-Identifier: BSD-2-Clause-Patent
**/
@@ -12,7 +12,6 @@ EFI_PHYSICAL_ADDRESS mGdtBuffer;
UINTN mGdtBufferSize;
extern BOOLEAN mCetSupported;
-extern UINTN mSmmShadowStackSize;
X86_ASSEMBLY_PATCH_LABEL mPatchCetPl0Ssp;
X86_ASSEMBLY_PATCH_LABEL mPatchCetInterruptSsp;
--
2.31.1.windows.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
* [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable() to create smm page table
2023-05-16 9:59 [Patch V4 00/15] Use CpuPageTableLib to create and update smm page table duntan
` (8 preceding siblings ...)
2023-05-16 9:59 ` [Patch V4 09/15] UefiCpuPkg: Extern mSmmShadowStackSize in PiSmmCpuDxeSmm.h duntan
@ 2023-05-16 9:59 ` duntan
2023-06-02 3:23 ` [edk2-devel] " Ni, Ray
2023-05-16 9:59 ` [Patch V4 11/15] UefiCpuPkg: Use GenSmmPageTable() to create Smm S3 " duntan
` (4 subsequent siblings)
14 siblings, 1 reply; 44+ messages in thread
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel; +Cc: Eric Dong, Ray Ni, Rahul Kumar, Gerd Hoffmann
This commit refines the current SMM page table generation
code. Add a new GenSmmPageTable() API to create the SMM page
table based on the PageTableMap() API in CpuPageTableLib. The
caller only needs to specify the paging mode and the
PhysicalAddressBits to map. This function can create IA32 PAE
paging as well as X64 5-level and 4-level paging.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c | 2 +-
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 15 +++++++++++++++
UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 65 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 220 ++++++++++++++++++++++++++--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
4 files changed, 107 insertions(+), 195 deletions(-)
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
index 9c8107080a..b11264ce4a 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
@@ -63,7 +63,7 @@ SmmInitPageTable (
InitializeIDTSmmStackGuard ();
}
- return Gen4GPageTable (TRUE);
+ return GenSmmPageTable (PagingPae, mPhysicalAddressBits);
}
/**
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
index a7da9673a5..5399659bc0 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
@@ -553,6 +553,21 @@ Gen4GPageTable (
IN BOOLEAN Is32BitPageTable
);
+/**
+  Create a page table in SMM based on the input PagingMode and PhysicalAddressBits.
+
+  @param[in]      PagingMode           The paging mode.
+  @param[in]      PhysicalAddressBits  The bits of physical address to map.
+
+  @return         The address of the created page table.
+
+**/
+UINTN
+GenSmmPageTable (
+ IN PAGING_MODE PagingMode,
+ IN UINT8 PhysicalAddressBits
+ );
+
/**
Initialize global data for MP synchronization.
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
index ef0ba9a355..138ff43c9d 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
@@ -1642,6 +1642,71 @@ EdkiiSmmClearMemoryAttributes (
return SmmClearMemoryAttributes (BaseAddress, Length, Attributes);
}
+/**
+  Create a page table in SMM based on the input PagingMode and PhysicalAddressBits.
+
+  @param[in]      PagingMode           The paging mode.
+  @param[in]      PhysicalAddressBits  The bits of physical address to map.
+
+  @return         The address of the created page table.
+
+**/
+UINTN
+GenSmmPageTable (
+ IN PAGING_MODE PagingMode,
+ IN UINT8 PhysicalAddressBits
+ )
+{
+ UINTN PageTableBufferSize;
+ UINTN PageTable;
+ VOID *PageTableBuffer;
+ IA32_MAP_ATTRIBUTE MapAttribute;
+ IA32_MAP_ATTRIBUTE MapMask;
+ RETURN_STATUS Status;
+ UINTN GuardPage;
+ UINTN Index;
+ UINT64 Length;
+
+ Length = LShiftU64 (1, PhysicalAddressBits);
+ PageTable = 0;
+ PageTableBufferSize = 0;
+ MapMask.Uint64 = MAX_UINT64;
+ MapAttribute.Uint64 = mAddressEncMask;
+ MapAttribute.Bits.Present = 1;
+ MapAttribute.Bits.ReadWrite = 1;
+ MapAttribute.Bits.UserSupervisor = 1;
+ MapAttribute.Bits.Accessed = 1;
+ MapAttribute.Bits.Dirty = 1;
+
+ Status = PageTableMap (&PageTable, PagingMode, NULL, &PageTableBufferSize, 0, Length, &MapAttribute, &MapMask, NULL);
+ ASSERT (Status == RETURN_BUFFER_TOO_SMALL);
+ DEBUG ((DEBUG_INFO, "GenSMMPageTable: 0x%x bytes needed for initial SMM page table\n", PageTableBufferSize));
+ PageTableBuffer = AllocatePageTableMemory (EFI_SIZE_TO_PAGES (PageTableBufferSize));
+ ASSERT (PageTableBuffer != NULL);
+ Status = PageTableMap (&PageTable, PagingMode, PageTableBuffer, &PageTableBufferSize, 0, Length, &MapAttribute, &MapMask, NULL);
+ ASSERT (Status == RETURN_SUCCESS);
+ ASSERT (PageTableBufferSize == 0);
+
+ if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
+ //
+ // Mark the 4KB guard page between known good stack and smm stack as non-present
+ //
+ for (Index = 0; Index < gSmmCpuPrivate->SmmCoreEntryContext.NumberOfCpus; Index++) {
+ GuardPage = mSmmStackArrayBase + EFI_PAGE_SIZE + Index * (mSmmStackSize + mSmmShadowStackSize);
+ Status = ConvertMemoryPageAttributes (PageTable, PagingMode, GuardPage, SIZE_4KB, EFI_MEMORY_RP, TRUE, NULL);
+ }
+ }
+
+ if ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & BIT1) != 0) {
+ //
+ // Mark [0, 4k] as non-present
+ //
+ Status = ConvertMemoryPageAttributes (PageTable, PagingMode, 0, SIZE_4KB, EFI_MEMORY_RP, TRUE, NULL);
+ }
+
+ return (UINTN)PageTable;
+}
+
/**
This function retrieves the attributes of the memory region specified by
BaseAddress and Length. If different attributes are got from different part
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
index 25ced50955..060e6dc147 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
@@ -167,160 +167,6 @@ CalculateMaximumSupportAddress (
return PhysicalAddressBits;
}
-/**
- Set static page table.
-
- @param[in] PageTable Address of page table.
- @param[in] PhysicalAddressBits The maximum physical address bits supported.
-**/
-VOID
-SetStaticPageTable (
- IN UINTN PageTable,
- IN UINT8 PhysicalAddressBits
- )
-{
- UINT64 PageAddress;
- UINTN NumberOfPml5EntriesNeeded;
- UINTN NumberOfPml4EntriesNeeded;
- UINTN NumberOfPdpEntriesNeeded;
- UINTN IndexOfPml5Entries;
- UINTN IndexOfPml4Entries;
- UINTN IndexOfPdpEntries;
- UINTN IndexOfPageDirectoryEntries;
- UINT64 *PageMapLevel5Entry;
- UINT64 *PageMapLevel4Entry;
- UINT64 *PageMap;
- UINT64 *PageDirectoryPointerEntry;
- UINT64 *PageDirectory1GEntry;
- UINT64 *PageDirectoryEntry;
-
- //
- // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses
- // when 5-Level Paging is disabled.
- //
- ASSERT (PhysicalAddressBits <= 52);
- if (!m5LevelPagingNeeded && (PhysicalAddressBits > 48)) {
- PhysicalAddressBits = 48;
- }
-
- NumberOfPml5EntriesNeeded = 1;
- if (PhysicalAddressBits > 48) {
- NumberOfPml5EntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits - 48);
- PhysicalAddressBits = 48;
- }
-
- NumberOfPml4EntriesNeeded = 1;
- if (PhysicalAddressBits > 39) {
- NumberOfPml4EntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits - 39);
- PhysicalAddressBits = 39;
- }
-
- NumberOfPdpEntriesNeeded = 1;
- ASSERT (PhysicalAddressBits > 30);
- NumberOfPdpEntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits - 30);
-
- //
- // By architecture only one PageMapLevel4 exists - so lets allocate storage for it.
- //
- PageMap = (VOID *)PageTable;
-
- PageMapLevel4Entry = PageMap;
- PageMapLevel5Entry = NULL;
- if (m5LevelPagingNeeded) {
- //
- // By architecture only one PageMapLevel5 exists - so lets allocate storage for it.
- //
- PageMapLevel5Entry = PageMap;
- }
-
- PageAddress = 0;
-
- for ( IndexOfPml5Entries = 0
- ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded
- ; IndexOfPml5Entries++, PageMapLevel5Entry++)
- {
- //
- // Each PML5 entry points to a page of PML4 entires.
- // So lets allocate space for them and fill them in in the IndexOfPml4Entries loop.
- // When 5-Level Paging is disabled, below allocation happens only once.
- //
- if (m5LevelPagingNeeded) {
- PageMapLevel4Entry = (UINT64 *)((*PageMapLevel5Entry) & ~mAddressEncMask & gPhyMask);
- if (PageMapLevel4Entry == NULL) {
- PageMapLevel4Entry = AllocatePageTableMemory (1);
- ASSERT (PageMapLevel4Entry != NULL);
- ZeroMem (PageMapLevel4Entry, EFI_PAGES_TO_SIZE (1));
-
- *PageMapLevel5Entry = (UINT64)(UINTN)PageMapLevel4Entry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
- }
- }
-
- for (IndexOfPml4Entries = 0; IndexOfPml4Entries < (NumberOfPml5EntriesNeeded == 1 ? NumberOfPml4EntriesNeeded : 512); IndexOfPml4Entries++, PageMapLevel4Entry++) {
- //
- // Each PML4 entry points to a page of Page Directory Pointer entries.
- //
- PageDirectoryPointerEntry = (UINT64 *)((*PageMapLevel4Entry) & ~mAddressEncMask & gPhyMask);
- if (PageDirectoryPointerEntry == NULL) {
- PageDirectoryPointerEntry = AllocatePageTableMemory (1);
- ASSERT (PageDirectoryPointerEntry != NULL);
- ZeroMem (PageDirectoryPointerEntry, EFI_PAGES_TO_SIZE (1));
-
- *PageMapLevel4Entry = (UINT64)(UINTN)PageDirectoryPointerEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
- }
-
- if (m1GPageTableSupport) {
- PageDirectory1GEntry = PageDirectoryPointerEntry;
- for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += SIZE_1GB) {
- if ((IndexOfPml4Entries == 0) && (IndexOfPageDirectoryEntries < 4)) {
- //
- // Skip the < 4G entries
- //
- continue;
- }
-
- //
- // Fill in the Page Directory entries
- //
- *PageDirectory1GEntry = PageAddress | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
- }
- } else {
- PageAddress = BASE_4GB;
- for (IndexOfPdpEntries = 0; IndexOfPdpEntries < (NumberOfPml4EntriesNeeded == 1 ? NumberOfPdpEntriesNeeded : 512); IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
- if ((IndexOfPml4Entries == 0) && (IndexOfPdpEntries < 4)) {
- //
- // Skip the < 4G entries
- //
- continue;
- }
-
- //
- // Each Directory Pointer entries points to a page of Page Directory entires.
- // So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop.
- //
- PageDirectoryEntry = (UINT64 *)((*PageDirectoryPointerEntry) & ~mAddressEncMask & gPhyMask);
- if (PageDirectoryEntry == NULL) {
- PageDirectoryEntry = AllocatePageTableMemory (1);
- ASSERT (PageDirectoryEntry != NULL);
- ZeroMem (PageDirectoryEntry, EFI_PAGES_TO_SIZE (1));
-
- //
- // Fill in a Page Directory Pointer Entries
- //
- *PageDirectoryPointerEntry = (UINT64)(UINTN)PageDirectoryEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
- }
-
- for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) {
- //
- // Fill in the Page Directory entries
- //
- *PageDirectoryEntry = PageAddress | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
- }
- }
- }
- }
- }
-}
-
/**
Create PageTable for SMM use.
@@ -332,15 +178,16 @@ SmmInitPageTable (
VOID
)
{
- EFI_PHYSICAL_ADDRESS Pages;
- UINT64 *PTEntry;
+ UINTN PageTable;
LIST_ENTRY *FreePage;
UINTN Index;
UINTN PageFaultHandlerHookAddress;
IA32_IDT_GATE_DESCRIPTOR *IdtEntry;
EFI_STATUS Status;
+ UINT64 *PdptEntry;
UINT64 *Pml4Entry;
UINT64 *Pml5Entry;
+ UINT8 PhysicalAddressBits;
//
// Initialize spin lock
@@ -357,59 +204,44 @@ SmmInitPageTable (
} else {
mPagingMode = m1GPageTableSupport ? Paging4Level1GB : Paging4Level;
}
+
DEBUG ((DEBUG_INFO, "5LevelPaging Needed - %d\n", m5LevelPagingNeeded));
DEBUG ((DEBUG_INFO, "1GPageTable Support - %d\n", m1GPageTableSupport));
DEBUG ((DEBUG_INFO, "PcdCpuSmmRestrictedMemoryAccess - %d\n", mCpuSmmRestrictedMemoryAccess));
DEBUG ((DEBUG_INFO, "PhysicalAddressBits - %d\n", mPhysicalAddressBits));
- //
- // Generate PAE page table for the first 4GB memory space
- //
- Pages = Gen4GPageTable (FALSE);
//
- // Set IA32_PG_PMNT bit to mask this entry
+ // Generate initial SMM page table.
+ // Only map [0, 4G] when PcdCpuSmmRestrictedMemoryAccess is FALSE.
//
- PTEntry = (UINT64 *)(UINTN)Pages;
- for (Index = 0; Index < 4; Index++) {
- PTEntry[Index] |= IA32_PG_PMNT;
- }
-
- //
- // Fill Page-Table-Level4 (PML4) entry
- //
- Pml4Entry = (UINT64 *)AllocatePageTableMemory (1);
- ASSERT (Pml4Entry != NULL);
- *Pml4Entry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
- ZeroMem (Pml4Entry + 1, EFI_PAGE_SIZE - sizeof (*Pml4Entry));
-
- //
- // Set sub-entries number
- //
- SetSubEntriesNum (Pml4Entry, 3);
- PTEntry = Pml4Entry;
+ PhysicalAddressBits = mCpuSmmRestrictedMemoryAccess ? mPhysicalAddressBits : 32;
+ PageTable = GenSmmPageTable (mPagingMode, PhysicalAddressBits);
if (m5LevelPagingNeeded) {
+ Pml5Entry = (UINT64 *)PageTable;
//
- // Fill PML5 entry
- //
- Pml5Entry = (UINT64 *)AllocatePageTableMemory (1);
- ASSERT (Pml5Entry != NULL);
- *Pml5Entry = (UINTN)Pml4Entry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
- ZeroMem (Pml5Entry + 1, EFI_PAGE_SIZE - sizeof (*Pml5Entry));
- //
- // Set sub-entries number
+ // Set Pml5Entry sub-entries number for smm PF handler usage.
//
SetSubEntriesNum (Pml5Entry, 1);
- PTEntry = Pml5Entry;
+ Pml4Entry = (UINT64 *)((*Pml5Entry) & ~mAddressEncMask & gPhyMask);
+ } else {
+ Pml4Entry = (UINT64 *)PageTable;
+ }
+
+ //
+ // Set the IA32_PG_PMNT bit to mask the first 4 entries in PdptEntry.
+ //
+ PdptEntry = (UINT64 *)((*Pml4Entry) & ~mAddressEncMask & gPhyMask);
+ for (Index = 0; Index < 4; Index++) {
+ PdptEntry[Index] |= IA32_PG_PMNT;
}
- if (mCpuSmmRestrictedMemoryAccess) {
+ if (!mCpuSmmRestrictedMemoryAccess) {
//
- // When access to non-SMRAM memory is restricted, create page table
- // that covers all memory space.
+ // Set Pml4Entry sub-entries number for smm PF handler usage.
//
- SetStaticPageTable ((UINTN)PTEntry, mPhysicalAddressBits);
- } else {
+ SetSubEntriesNum (Pml4Entry, 3);
+
//
// Add pages to page pool
//
@@ -466,7 +298,7 @@ SmmInitPageTable (
//
// Return the address of PML4/PML5 (to set CR3)
//
- return (UINT32)(UINTN)PTEntry;
+ return (UINT32)PageTable;
}
/**
--
2.31.1.windows.1
* [Patch V4 11/15] UefiCpuPkg: Use GenSmmPageTable() to create Smm S3 page table
2023-05-16 9:59 [Patch V4 00/15] Use CpuPageTableLib to create and update smm page table duntan
` (9 preceding siblings ...)
2023-05-16 9:59 ` [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable() to create smm page table duntan
@ 2023-05-16 9:59 ` duntan
2023-06-02 3:31 ` [edk2-devel] " Ni, Ray
2023-05-16 9:59 ` [Patch V4 12/15] UefiCpuPkg: Sort mSmmCpuSmramRanges in FindSmramInfo duntan
` (3 subsequent siblings)
14 siblings, 1 reply; 44+ messages in thread
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel; +Cc: Eric Dong, Ray Ni, Rahul Kumar, Gerd Hoffmann
Use GenSmmPageTable() to create both the IA32 and X64 Smm S3
page tables. Gen4GPageTable() is then no longer used, so
delete it.
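For the X64 S3 path the patch calls GenSmmPageTable (Paging4Level, 32), i.e. it maps an address space of 1 << PhysicalAddressBits bytes, so 32 bits covers exactly the first 4 GB. A tiny sketch of that relation, mirroring `Length = LShiftU64 (1, PhysicalAddressBits)` inside GenSmmPageTable() (MappedLength is an illustrative helper, not edk2 code):

```c
#include <assert.h>
#include <stdint.h>

/* Size in bytes of the address space covered when mapping
   PhysicalAddressBits bits of physical address. */
static uint64_t
MappedLength (uint8_t PhysicalAddressBits)
{
  return (uint64_t)1 << PhysicalAddressBits;
}
```

With 32 bits this yields 0x100000000 (4 GB), matching the range the old Gen4GPageTable() covered for the S3 resume path.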
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmProfileArch.c | 2 +-
UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 130 ----------------------------------------------------------------------------------------------------------------------------------
UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c | 19 ++-----------------
3 files changed, 3 insertions(+), 148 deletions(-)
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmProfileArch.c b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmProfileArch.c
index bba4a6f058..650090e534 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmProfileArch.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmProfileArch.c
@@ -18,7 +18,7 @@ InitSmmS3Cr3 (
VOID
)
{
- mSmmS3ResumeState->SmmS3Cr3 = Gen4GPageTable (TRUE);
+ mSmmS3ResumeState->SmmS3Cr3 = GenSmmPageTable (PagingPae, mPhysicalAddressBits);
return;
}
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
index 1878252eac..f8b81fc96e 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
@@ -999,136 +999,6 @@ APHandler (
ReleaseSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
}
-/**
- Create 4G PageTable in SMRAM.
-
- @param[in] Is32BitPageTable Whether the page table is 32-bit PAE
- @return PageTable Address
-
-**/
-UINT32
-Gen4GPageTable (
- IN BOOLEAN Is32BitPageTable
- )
-{
- VOID *PageTable;
- UINTN Index;
- UINT64 *Pte;
- UINTN PagesNeeded;
- UINTN Low2MBoundary;
- UINTN High2MBoundary;
- UINTN Pages;
- UINTN GuardPage;
- UINT64 *Pdpte;
- UINTN PageIndex;
- UINTN PageAddress;
-
- Low2MBoundary = 0;
- High2MBoundary = 0;
- PagesNeeded = 0;
- if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
- //
- // Add one more page for known good stack, then find the lower 2MB aligned address.
- //
- Low2MBoundary = (mSmmStackArrayBase + EFI_PAGE_SIZE) & ~(SIZE_2MB-1);
- //
- // Add two more pages for known good stack and stack guard page,
- // then find the lower 2MB aligned address.
- //
- High2MBoundary = (mSmmStackArrayEnd - mSmmStackSize - mSmmShadowStackSize + EFI_PAGE_SIZE * 2) & ~(SIZE_2MB-1);
- PagesNeeded = ((High2MBoundary - Low2MBoundary) / SIZE_2MB) + 1;
- }
-
- //
- // Allocate the page table
- //
- PageTable = AllocatePageTableMemory (5 + PagesNeeded);
- ASSERT (PageTable != NULL);
-
- PageTable = (VOID *)((UINTN)PageTable);
- Pte = (UINT64 *)PageTable;
-
- //
- // Zero out all page table entries first
- //
- ZeroMem (Pte, EFI_PAGES_TO_SIZE (1));
-
- //
- // Set Page Directory Pointers
- //
- for (Index = 0; Index < 4; Index++) {
- Pte[Index] = ((UINTN)PageTable + EFI_PAGE_SIZE * (Index + 1)) | mAddressEncMask |
- (Is32BitPageTable ? IA32_PAE_PDPTE_ATTRIBUTE_BITS : PAGE_ATTRIBUTE_BITS);
- }
-
- Pte += EFI_PAGE_SIZE / sizeof (*Pte);
-
- //
- // Fill in Page Directory Entries
- //
- for (Index = 0; Index < EFI_PAGE_SIZE * 4 / sizeof (*Pte); Index++) {
- Pte[Index] = (Index << 21) | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
- }
-
- Pdpte = (UINT64 *)PageTable;
- if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
- Pages = (UINTN)PageTable + EFI_PAGES_TO_SIZE (5);
- GuardPage = mSmmStackArrayBase + EFI_PAGE_SIZE;
- for (PageIndex = Low2MBoundary; PageIndex <= High2MBoundary; PageIndex += SIZE_2MB) {
- Pte = (UINT64 *)(UINTN)(Pdpte[BitFieldRead32 ((UINT32)PageIndex, 30, 31)] & ~mAddressEncMask & ~(EFI_PAGE_SIZE - 1));
- Pte[BitFieldRead32 ((UINT32)PageIndex, 21, 29)] = (UINT64)Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
- //
- // Fill in Page Table Entries
- //
- Pte = (UINT64 *)Pages;
- PageAddress = PageIndex;
- for (Index = 0; Index < EFI_PAGE_SIZE / sizeof (*Pte); Index++) {
- if (PageAddress == GuardPage) {
- //
- // Mark the guard page as non-present
- //
- Pte[Index] = PageAddress | mAddressEncMask;
- GuardPage += (mSmmStackSize + mSmmShadowStackSize);
- if (GuardPage > mSmmStackArrayEnd) {
- GuardPage = 0;
- }
- } else {
- Pte[Index] = PageAddress | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
- }
-
- PageAddress += EFI_PAGE_SIZE;
- }
-
- Pages += EFI_PAGE_SIZE;
- }
- }
-
- if ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & BIT1) != 0) {
- Pte = (UINT64 *)(UINTN)(Pdpte[0] & ~mAddressEncMask & ~(EFI_PAGE_SIZE - 1));
- if ((Pte[0] & IA32_PG_PS) == 0) {
- // 4K-page entries are already mapped. Just hide the first one anyway.
- Pte = (UINT64 *)(UINTN)(Pte[0] & ~mAddressEncMask & ~(EFI_PAGE_SIZE - 1));
- Pte[0] &= ~(UINT64)IA32_PG_P; // Hide page 0
- } else {
- // Create 4K-page entries
- Pages = (UINTN)AllocatePageTableMemory (1);
- ASSERT (Pages != 0);
-
- Pte[0] = (UINT64)(Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS);
-
- Pte = (UINT64 *)Pages;
- PageAddress = 0;
- Pte[0] = PageAddress | mAddressEncMask; // Hide page 0 but present left
- for (Index = 1; Index < EFI_PAGE_SIZE / sizeof (*Pte); Index++) {
- PageAddress += EFI_PAGE_SIZE;
- Pte[Index] = PageAddress | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
- }
- }
- }
-
- return (UINT32)(UINTN)PageTable;
-}
-
/**
Checks whether the input token is the current used token.
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
index cb7a691745..0805b2e780 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
@@ -35,26 +35,11 @@ InitSmmS3Cr3 (
VOID
)
{
- EFI_PHYSICAL_ADDRESS Pages;
- UINT64 *PTEntry;
-
- //
- // Generate PAE page table for the first 4GB memory space
- //
- Pages = Gen4GPageTable (FALSE);
-
- //
- // Fill Page-Table-Level4 (PML4) entry
- //
- PTEntry = (UINT64 *)AllocatePageTableMemory (1);
- ASSERT (PTEntry != NULL);
- *PTEntry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
- ZeroMem (PTEntry + 1, EFI_PAGE_SIZE - sizeof (*PTEntry));
-
//
+ // Generate level4 page table for the first 4GB memory space
// Return the address of PML4 (to set CR3)
//
- mSmmS3ResumeState->SmmS3Cr3 = (UINT32)(UINTN)PTEntry;
+ mSmmS3ResumeState->SmmS3Cr3 = (UINT32)GenSmmPageTable (Paging4Level, 32);
return;
}
--
2.31.1.windows.1
* [Patch V4 12/15] UefiCpuPkg: Sort mSmmCpuSmramRanges in FindSmramInfo
2023-05-16 9:59 [Patch V4 00/15] Use CpuPageTableLib to create and update smm page table duntan
` (10 preceding siblings ...)
2023-05-16 9:59 ` [Patch V4 11/15] UefiCpuPkg: Use GenSmmPageTable() to create Smm S3 " duntan
@ 2023-05-16 9:59 ` duntan
2023-06-02 3:33 ` [edk2-devel] " Ni, Ray
2023-05-16 9:59 ` [Patch V4 13/15] UefiCpuPkg: Sort mProtectionMemRange when ReadyToLock duntan
` (2 subsequent siblings)
14 siblings, 1 reply; 44+ messages in thread
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel; +Cc: Eric Dong, Ray Ni, Rahul Kumar, Gerd Hoffmann
Sort mSmmCpuSmramRanges after getting the SMRAM info in the
FindSmramInfo() function.
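The patch sorts with QuickSort() and a comparator keyed on CpuStart. The same comparator contract can be sketched with the C standard library's qsort(); SMRAM_RANGE here is a simplified stand-in for EFI_SMRAM_DESCRIPTOR, not the real edk2 type:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
  uint64_t CpuStart;
  uint64_t PhysicalSize;
} SMRAM_RANGE;   /* simplified stand-in for EFI_SMRAM_DESCRIPTOR */

/* Comparator matching CpuSmramRangeCompare(): orders ranges by
   ascending CpuStart; returns >0, <0, or 0 per the qsort contract. */
static int
RangeCompare (const void *A, const void *B)
{
  const SMRAM_RANGE *Ra = (const SMRAM_RANGE *)A;
  const SMRAM_RANGE *Rb = (const SMRAM_RANGE *)B;

  if (Ra->CpuStart > Rb->CpuStart) {
    return 1;
  } else if (Ra->CpuStart < Rb->CpuStart) {
    return -1;
  }

  return 0;
}
```

Sorting the ranges up front lets later code (such as the InitPaging() refinement in patch 14) walk SMRAM descriptors in address order and treat the gaps between them uniformly.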
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
index c0e368ea94..d69e976269 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
@@ -1197,6 +1197,32 @@ PiCpuSmmEntry (
return EFI_SUCCESS;
}
+/**
+ Function to compare 2 EFI_SMRAM_DESCRIPTOR based on CpuStart.
+
+ @param[in] Buffer1 Pointer to the first EFI_SMRAM_DESCRIPTOR to compare.
+ @param[in] Buffer2 Pointer to the second EFI_SMRAM_DESCRIPTOR to compare.
+
+ @retval 0 Buffer1 equal to Buffer2
+ @retval <0 Buffer1 is less than Buffer2
+ @retval >0 Buffer1 is greater than Buffer2
+**/
+INTN
+EFIAPI
+CpuSmramRangeCompare (
+ IN CONST VOID *Buffer1,
+ IN CONST VOID *Buffer2
+ )
+{
+ if (((EFI_SMRAM_DESCRIPTOR *)Buffer1)->CpuStart > ((EFI_SMRAM_DESCRIPTOR *)Buffer2)->CpuStart) {
+ return 1;
+ } else if (((EFI_SMRAM_DESCRIPTOR *)Buffer1)->CpuStart < ((EFI_SMRAM_DESCRIPTOR *)Buffer2)->CpuStart) {
+ return -1;
+ }
+
+ return 0;
+}
+
/**
Find out SMRAM information including SMRR base and SMRR size.
@@ -1218,6 +1244,7 @@ FindSmramInfo (
UINTN Index;
UINT64 MaxSize;
BOOLEAN Found;
+ VOID *Buffer;
//
// Get SMM Access Protocol
@@ -1240,6 +1267,14 @@ FindSmramInfo (
mSmmCpuSmramRangeCount = Size / sizeof (EFI_SMRAM_DESCRIPTOR);
+ //
+ // Sort the mSmmCpuSmramRanges
+ //
+ Buffer = AllocateZeroPool (sizeof (EFI_SMRAM_DESCRIPTOR));
+ ASSERT (Buffer != NULL);
+ QuickSort (mSmmCpuSmramRanges, mSmmCpuSmramRangeCount, sizeof (EFI_SMRAM_DESCRIPTOR), (BASE_SORT_COMPARE)CpuSmramRangeCompare, Buffer);
+ FreePool (Buffer);
+
//
// Find the largest SMRAM range between 1MB and 4GB that is at least 256K - 4K in size
//
--
2.31.1.windows.1
* [Patch V4 13/15] UefiCpuPkg: Sort mProtectionMemRange when ReadyToLock
2023-05-16 9:59 [Patch V4 00/15] Use CpuPageTableLib to create and update smm page table duntan
` (11 preceding siblings ...)
2023-05-16 9:59 ` [Patch V4 12/15] UefiCpuPkg: Sort mSmmCpuSmramRanges in FindSmramInfo duntan
@ 2023-05-16 9:59 ` duntan
2023-06-02 3:34 ` Ni, Ray
2023-05-16 9:59 ` [Patch V4 14/15] UefiCpuPkg: Refinement to smm runtime InitPaging() code duntan
2023-05-16 9:59 ` [Patch V4 15/15] UefiCpuPkg/PiSmmCpuDxeSmm: Remove unnecessary function duntan
14 siblings, 1 reply; 44+ messages in thread
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel; +Cc: Eric Dong, Ray Ni, Rahul Kumar, Gerd Hoffmann
Sort mProtectionMemRange in InitProtectedMemRange() when
ReadyToLock.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
index 5625ba0cac..b298e2fb99 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
@@ -375,6 +375,32 @@ IsAddressSplit (
return FALSE;
}
+/**
+ Function to compare 2 MEMORY_PROTECTION_RANGE based on range base.
+
+ @param[in] Buffer1 Pointer to the first MEMORY_PROTECTION_RANGE to compare.
+ @param[in] Buffer2 Pointer to the second MEMORY_PROTECTION_RANGE to compare.
+
+ @retval 0 Buffer1 equal to Buffer2
+ @retval <0 Buffer1 is less than Buffer2
+ @retval >0 Buffer1 is greater than Buffer2
+**/
+INTN
+EFIAPI
+ProtectionRangeCompare (
+ IN CONST VOID *Buffer1,
+ IN CONST VOID *Buffer2
+ )
+{
+ if (((MEMORY_PROTECTION_RANGE *)Buffer1)->Range.Base > ((MEMORY_PROTECTION_RANGE *)Buffer2)->Range.Base) {
+ return 1;
+ } else if (((MEMORY_PROTECTION_RANGE *)Buffer1)->Range.Base < ((MEMORY_PROTECTION_RANGE *)Buffer2)->Range.Base) {
+ return -1;
+ }
+
+ return 0;
+}
+
/**
Initialize the protected memory ranges and the 4KB-page mapped memory ranges.
@@ -397,6 +423,7 @@ InitProtectedMemRange (
EFI_PHYSICAL_ADDRESS Base2MBAlignedAddress;
UINT64 High4KBPageSize;
UINT64 Low4KBPageSize;
+ VOID *Buffer;
NumberOfDescriptors = 0;
NumberOfAddedDescriptors = mSmmCpuSmramRangeCount;
@@ -533,6 +560,14 @@ InitProtectedMemRange (
mSplitMemRangeCount = NumberOfSpliteRange;
+ //
+ // Sort the mProtectionMemRange
+ //
+ Buffer = AllocateZeroPool (sizeof (MEMORY_PROTECTION_RANGE));
+ ASSERT (Buffer != NULL);
+ QuickSort (mProtectionMemRange, mProtectionMemRangeCount, sizeof (MEMORY_PROTECTION_RANGE), (BASE_SORT_COMPARE)ProtectionRangeCompare, Buffer);
+ FreePool (Buffer);
+
DEBUG ((DEBUG_INFO, "SMM Profile Memory Ranges:\n"));
for (Index = 0; Index < mProtectionMemRangeCount; Index++) {
DEBUG ((DEBUG_INFO, "mProtectionMemRange[%d].Base = %lx\n", Index, mProtectionMemRange[Index].Range.Base));
--
2.31.1.windows.1
* [Patch V4 14/15] UefiCpuPkg: Refinement to smm runtime InitPaging() code
2023-05-16 9:59 [Patch V4 00/15] Use CpuPageTableLib to create and update smm page table duntan
` (12 preceding siblings ...)
2023-05-16 9:59 ` [Patch V4 13/15] UefiCpuPkg: Sort mProtectionMemRange when ReadyToLock duntan
@ 2023-05-16 9:59 ` duntan
2023-06-02 3:54 ` [edk2-devel] " Ni, Ray
2023-05-16 9:59 ` [Patch V4 15/15] UefiCpuPkg/PiSmmCpuDxeSmm: Remove unnecessary function duntan
14 siblings, 1 reply; 44+ messages in thread
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel; +Cc: Eric Dong, Ray Ni, Rahul Kumar, Gerd Hoffmann
This commit refines the current smm runtime InitPaging()
page table update code. In InitPaging(), if PcdCpuSmmProfileEnable
is TRUE, use the ConvertMemoryPageAttributes() API to map each
range in mProtectionMemRange to the attribute recorded in its
attribute field, and map the ranges outside mProtectionMemRange
as non-present. If PcdCpuSmmProfileEnable is FALSE, only the
ranges not in mSmmCpuSmramRanges need to be set as NX.
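The refined InitPaging() walks the (now sorted) ranges with a PreviousAddress cursor, so everything between the end of one range and the start of the next is handled as a single gap. A minimal standalone sketch of that gap walk, assuming sorted non-overlapping input; RANGE and CollectGaps are illustrative names, not edk2 code:

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
  uint64_t Base;
  uint64_t Length;
} RANGE;

/* Collect [0, Limit) minus the sorted input ranges into Gaps, the way
   the refined InitPaging() marks memory outside mProtectionMemRange
   non-present. Returns the number of gaps found; Gaps must have room
   for Count + 1 entries. */
static int
CollectGaps (const RANGE *Ranges, int Count, uint64_t Limit, RANGE *Gaps)
{
  uint64_t PreviousAddress = 0;
  int      GapCount        = 0;
  int      Index;

  for (Index = 0; Index < Count; Index++) {
    if (Ranges[Index].Base > PreviousAddress) {
      Gaps[GapCount].Base   = PreviousAddress;
      Gaps[GapCount].Length = Ranges[Index].Base - PreviousAddress;
      GapCount++;
    }

    PreviousAddress = Ranges[Index].Base + Ranges[Index].Length;
  }

  /* Trailing gap between the last range and the address-space limit. */
  if (Limit > PreviousAddress) {
    Gaps[GapCount].Base   = PreviousAddress;
    Gaps[GapCount].Length = Limit - PreviousAddress;
    GapCount++;
  }

  return GapCount;
}
```

In the real patch each collected gap is passed to ConvertMemoryPageAttributes() with EFI_MEMORY_RP (profile enabled) or EFI_MEMORY_XP (profile disabled), while the ranges themselves get the recorded attributes.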
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 37 +++++++++++++++++++++++++++++++++++++
UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c | 293 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2 files changed, 101 insertions(+), 229 deletions(-)
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
index 5399659bc0..12ad86028e 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
@@ -725,6 +725,43 @@ SmmBlockingStartupThisAp (
IN OUT VOID *ProcArguments OPTIONAL
);
+/**
+ This function modifies the page attributes for the memory region specified by BaseAddress and
+ Length from their current attributes to the attributes specified by Attributes.
+
+ Caller should make sure BaseAddress and Length is at page boundary.
+
+ @param[in] PageTableBase The page table base.
+ @param[in] BaseAddress The physical address that is the start address of a memory region.
+ @param[in] Length The size in bytes of the memory region.
+ @param[in] Attributes The bit mask of attributes to modify for the memory region.
+ @param[in] IsSet TRUE means to set attributes. FALSE means to clear attributes.
+ @param[out] IsModified TRUE means page table modified. FALSE means page table not modified.
+
+ @retval RETURN_SUCCESS The attributes were modified for the memory region.
+ @retval RETURN_ACCESS_DENIED The attributes for the memory resource range specified by
+ BaseAddress and Length cannot be modified.
+ @retval RETURN_INVALID_PARAMETER Length is zero.
+ Attributes specified an illegal combination of attributes that
+ cannot be set together.
+ @retval RETURN_OUT_OF_RESOURCES There are not enough system resources to modify the attributes of
+ the memory resource range.
+ @retval RETURN_UNSUPPORTED The processor does not support one or more bytes of the memory
+ resource range specified by BaseAddress and Length.
+ The bit mask of attributes is not support for the memory resource
+ range specified by BaseAddress and Length.
+**/
+RETURN_STATUS
+ConvertMemoryPageAttributes (
+ IN UINTN PageTableBase,
+ IN PAGING_MODE PagingMode,
+ IN PHYSICAL_ADDRESS BaseAddress,
+ IN UINT64 Length,
+ IN UINT64 Attributes,
+ IN BOOLEAN IsSet,
+ OUT BOOLEAN *IsModified OPTIONAL
+ );
+
/**
This function sets the attributes for the memory region specified by BaseAddress and
Length from their current attributes to the attributes specified by Attributes.
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
index b298e2fb99..0b117b7b7b 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
@@ -589,254 +589,89 @@ InitPaging (
VOID
)
{
- UINT64 Pml5Entry;
- UINT64 Pml4Entry;
- UINT64 *Pml5;
- UINT64 *Pml4;
- UINT64 *Pdpt;
- UINT64 *Pd;
- UINT64 *Pt;
- UINTN Address;
- UINTN Pml5Index;
- UINTN Pml4Index;
- UINTN PdptIndex;
- UINTN PdIndex;
- UINTN PtIndex;
- UINTN NumberOfPdptEntries;
- UINTN NumberOfPml4Entries;
- UINTN NumberOfPml5Entries;
- UINTN SizeOfMemorySpace;
- BOOLEAN Nx;
- IA32_CR4 Cr4;
- BOOLEAN Enable5LevelPaging;
- BOOLEAN WpEnabled;
- BOOLEAN CetEnabled;
-
- Cr4.UintN = AsmReadCr4 ();
- Enable5LevelPaging = (BOOLEAN)(Cr4.Bits.LA57 == 1);
-
- if (sizeof (UINTN) == sizeof (UINT64)) {
- if (!Enable5LevelPaging) {
- Pml5Entry = (UINTN)mSmmProfileCr3 | IA32_PG_P;
- Pml5 = &Pml5Entry;
- } else {
- Pml5 = (UINT64 *)(UINTN)mSmmProfileCr3;
- }
-
- SizeOfMemorySpace = HighBitSet64 (gPhyMask) + 1;
- ASSERT (SizeOfMemorySpace <= 52);
-
- //
- // Calculate the table entries of PML5E, PML4E and PDPTE.
- //
- NumberOfPml5Entries = 1;
- if (SizeOfMemorySpace > 48) {
- if (Enable5LevelPaging) {
- NumberOfPml5Entries = (UINTN)LShiftU64 (1, SizeOfMemorySpace - 48);
- }
-
- SizeOfMemorySpace = 48;
- }
-
- NumberOfPml4Entries = 1;
- if (SizeOfMemorySpace > 39) {
- NumberOfPml4Entries = (UINTN)LShiftU64 (1, SizeOfMemorySpace - 39);
- SizeOfMemorySpace = 39;
- }
-
- NumberOfPdptEntries = 1;
- ASSERT (SizeOfMemorySpace > 30);
- NumberOfPdptEntries = (UINTN)LShiftU64 (1, SizeOfMemorySpace - 30);
+ RETURN_STATUS Status;
+ UINTN Index;
+ UINTN PageTable;
+ UINT64 Base;
+ UINT64 Length;
+ UINT64 Limit;
+ UINT64 PreviousAddress;
+ UINT64 MemoryAttrMask;
+ BOOLEAN WpEnabled;
+ BOOLEAN CetEnabled;
+
+ PageTable = AsmReadCr3 ();
+ if (sizeof (UINTN) == sizeof (UINT32)) {
+ Limit = BASE_4GB;
} else {
- Pml4Entry = (UINTN)mSmmProfileCr3 | IA32_PG_P;
- Pml4 = &Pml4Entry;
- Pml5Entry = (UINTN)Pml4 | IA32_PG_P;
- Pml5 = &Pml5Entry;
- NumberOfPml5Entries = 1;
- NumberOfPml4Entries = 1;
- NumberOfPdptEntries = 4;
+ Limit = (IsRestrictedMemoryAccess ()) ? LShiftU64 (1, mPhysicalAddressBits) : BASE_4GB;
}
DisableReadOnlyPageWriteProtect (&WpEnabled, &CetEnabled);
//
- // Go through page table and change 2MB-page into 4KB-page.
+ // [0, 4k] may be non-present.
//
- for (Pml5Index = 0; Pml5Index < NumberOfPml5Entries; Pml5Index++) {
- if ((Pml5[Pml5Index] & IA32_PG_P) == 0) {
- //
- // If PML5 entry does not exist, skip it
- //
- continue;
- }
+ PreviousAddress = ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & BIT1) != 0) ? BASE_4KB : 0;
- Pml4 = (UINT64 *)(UINTN)(Pml5[Pml5Index] & PHYSICAL_ADDRESS_MASK);
- for (Pml4Index = 0; Pml4Index < NumberOfPml4Entries; Pml4Index++) {
- if ((Pml4[Pml4Index] & IA32_PG_P) == 0) {
- //
- // If PML4 entry does not exist, skip it
- //
- continue;
+ DEBUG ((DEBUG_INFO, "Patch page table start ...\n"));
+ if (FeaturePcdGet (PcdCpuSmmProfileEnable)) {
+ for (Index = 0; Index < mProtectionMemRangeCount; Index++) {
+ MemoryAttrMask = 0;
+ if ((mProtectionMemRange[Index].Nx == TRUE) && mXdSupported) {
+ MemoryAttrMask |= EFI_MEMORY_XP;
}
- Pdpt = (UINT64 *)(UINTN)(Pml4[Pml4Index] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
- for (PdptIndex = 0; PdptIndex < NumberOfPdptEntries; PdptIndex++, Pdpt++) {
- if ((*Pdpt & IA32_PG_P) == 0) {
- //
- // If PDPT entry does not exist, skip it
- //
- continue;
- }
-
- if ((*Pdpt & IA32_PG_PS) != 0) {
- //
- // This is 1G entry, skip it
- //
- continue;
- }
-
- Pd = (UINT64 *)(UINTN)(*Pdpt & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
- if (Pd == 0) {
- continue;
- }
-
- for (PdIndex = 0; PdIndex < SIZE_4KB / sizeof (*Pd); PdIndex++, Pd++) {
- if ((*Pd & IA32_PG_P) == 0) {
- //
- // If PD entry does not exist, skip it
- //
- continue;
- }
-
- Address = (UINTN)LShiftU64 (
- LShiftU64 (
- LShiftU64 ((Pml5Index << 9) + Pml4Index, 9) + PdptIndex,
- 9
- ) + PdIndex,
- 21
- );
-
- //
- // If it is 2M page, check IsAddressSplit()
- //
- if (((*Pd & IA32_PG_PS) != 0) && IsAddressSplit (Address)) {
- //
- // Based on current page table, create 4KB page table for split area.
- //
- ASSERT (Address == (*Pd & PHYSICAL_ADDRESS_MASK));
-
- Pt = AllocatePageTableMemory (1);
- ASSERT (Pt != NULL);
+ if (mProtectionMemRange[Index].Present == FALSE) {
+ MemoryAttrMask = EFI_MEMORY_RP;
+ }
- // Split it
- for (PtIndex = 0; PtIndex < SIZE_4KB / sizeof (*Pt); PtIndex++) {
- Pt[PtIndex] = Address + ((PtIndex << 12) | mAddressEncMask | PAGE_ATTRIBUTE_BITS);
- } // end for PT
+ Base = mProtectionMemRange[Index].Range.Base;
+ Length = mProtectionMemRange[Index].Range.Top - Base;
+ if (MemoryAttrMask != 0) {
+ Status = ConvertMemoryPageAttributes (PageTable, mPagingMode, Base, Length, MemoryAttrMask, TRUE, NULL);
+ ASSERT_RETURN_ERROR (Status);
+ }
- *Pd = (UINT64)(UINTN)Pt | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
- } // end if IsAddressSplit
- } // end for PD
- } // end for PDPT
- } // end for PML4
- } // end for PML5
+ if (Base > PreviousAddress) {
+ //
+ // Mark the ranges not in mProtectionMemRange as non-present.
+ //
+ MemoryAttrMask = EFI_MEMORY_RP;
+ Status = ConvertMemoryPageAttributes (PageTable, mPagingMode, PreviousAddress, Base - PreviousAddress, MemoryAttrMask, TRUE, NULL);
+ ASSERT_RETURN_ERROR (Status);
+ }
- //
- // Go through page table and set several page table entries to absent or execute-disable.
- //
- DEBUG ((DEBUG_INFO, "Patch page table start ...\n"));
- for (Pml5Index = 0; Pml5Index < NumberOfPml5Entries; Pml5Index++) {
- if ((Pml5[Pml5Index] & IA32_PG_P) == 0) {
- //
- // If PML5 entry does not exist, skip it
- //
- continue;
+ PreviousAddress = Base + Length;
}
- Pml4 = (UINT64 *)(UINTN)(Pml5[Pml5Index] & PHYSICAL_ADDRESS_MASK);
- for (Pml4Index = 0; Pml4Index < NumberOfPml4Entries; Pml4Index++) {
- if ((Pml4[Pml4Index] & IA32_PG_P) == 0) {
+ //
+ // This assignment is for setting the last remaining range
+ //
+ MemoryAttrMask = EFI_MEMORY_RP;
+ } else {
+ MemoryAttrMask = EFI_MEMORY_XP;
+ for (Index = 0; Index < mSmmCpuSmramRangeCount; Index++) {
+ Base = mSmmCpuSmramRanges[Index].CpuStart;
+ if ((Base > PreviousAddress) && mXdSupported) {
//
- // If PML4 entry does not exist, skip it
+ // Mark the ranges not in mSmmCpuSmramRanges as NX.
//
- continue;
+ Status = ConvertMemoryPageAttributes (PageTable, mPagingMode, PreviousAddress, Base - PreviousAddress, MemoryAttrMask, TRUE, NULL);
+ ASSERT_RETURN_ERROR (Status);
}
- Pdpt = (UINT64 *)(UINTN)(Pml4[Pml4Index] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
- for (PdptIndex = 0; PdptIndex < NumberOfPdptEntries; PdptIndex++, Pdpt++) {
- if ((*Pdpt & IA32_PG_P) == 0) {
- //
- // If PDPT entry does not exist, skip it
- //
- continue;
- }
-
- if ((*Pdpt & IA32_PG_PS) != 0) {
- //
- // This is 1G entry, set NX bit and skip it
- //
- if (mXdSupported) {
- *Pdpt = *Pdpt | IA32_PG_NX;
- }
-
- continue;
- }
-
- Pd = (UINT64 *)(UINTN)(*Pdpt & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
- if (Pd == 0) {
- continue;
- }
+ PreviousAddress = mSmmCpuSmramRanges[Index].CpuStart + mSmmCpuSmramRanges[Index].PhysicalSize;
+ }
+ }
- for (PdIndex = 0; PdIndex < SIZE_4KB / sizeof (*Pd); PdIndex++, Pd++) {
- if ((*Pd & IA32_PG_P) == 0) {
- //
- // If PD entry does not exist, skip it
- //
- continue;
- }
-
- Address = (UINTN)LShiftU64 (
- LShiftU64 (
- LShiftU64 ((Pml5Index << 9) + Pml4Index, 9) + PdptIndex,
- 9
- ) + PdIndex,
- 21
- );
-
- if ((*Pd & IA32_PG_PS) != 0) {
- // 2MB page
-
- if (!IsAddressValid (Address, &Nx)) {
- //
- // Patch to remove Present flag and RW flag
- //
- *Pd = *Pd & (INTN)(INT32)(~PAGE_ATTRIBUTE_BITS);
- }
-
- if (Nx && mXdSupported) {
- *Pd = *Pd | IA32_PG_NX;
- }
- } else {
- // 4KB page
- Pt = (UINT64 *)(UINTN)(*Pd & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
- if (Pt == 0) {
- continue;
- }
-
- for (PtIndex = 0; PtIndex < SIZE_4KB / sizeof (*Pt); PtIndex++, Pt++) {
- if (!IsAddressValid (Address, &Nx)) {
- *Pt = *Pt & (INTN)(INT32)(~PAGE_ATTRIBUTE_BITS);
- }
-
- if (Nx && mXdSupported) {
- *Pt = *Pt | IA32_PG_NX;
- }
-
- Address += SIZE_4KB;
- } // end for PT
- } // end if PS
- } // end for PD
- } // end for PDPT
- } // end for PML4
- } // end for PML5
+ if (PreviousAddress < Limit) {
+ //
+ // Set the last remaining range to EFI_MEMORY_RP/EFI_MEMORY_XP.
+ // This path applies to both SmmProfile enable/disable case.
+ //
+ Status = ConvertMemoryPageAttributes (PageTable, mPagingMode, PreviousAddress, Limit - PreviousAddress, MemoryAttrMask, TRUE, NULL);
+ ASSERT_RETURN_ERROR (Status);
+ }
EnableReadOnlyPageWriteProtect (WpEnabled, CetEnabled);
--
2.31.1.windows.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
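The refined InitPaging() above depends on the ranges being sorted by base address, so a single pass can mark every hole between consecutive ranges (plus the tail up to Limit) as non-present. A minimal stand-alone C sketch of that gap-collection logic — names such as `CollectGaps` and `RANGE` are illustrative, not taken from the firmware:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative model of the gap-marking loop in the refined InitPaging():
   Ranges are assumed sorted by Base; any hole between PreviousAddress and
   the next range's base would get the non-present attribute. */
typedef struct {
  uint64_t Base;
  uint64_t Top;   /* exclusive */
} RANGE;

/* Collect the gaps in [Start, Limit) not covered by Ranges.
   Returns the number of gaps written to Gaps. */
size_t CollectGaps (const RANGE *Ranges, size_t Count,
                    uint64_t Start, uint64_t Limit, RANGE *Gaps)
{
  size_t   GapCount = 0;
  uint64_t Previous = Start;

  for (size_t Index = 0; Index < Count; Index++) {
    if (Ranges[Index].Base > Previous) {
      Gaps[GapCount].Base = Previous;
      Gaps[GapCount].Top  = Ranges[Index].Base;
      GapCount++;
    }
    Previous = Ranges[Index].Top;
  }

  if (Previous < Limit) {
    /* The last remaining range, up to Limit. */
    Gaps[GapCount].Base = Previous;
    Gaps[GapCount].Top  = Limit;
    GapCount++;
  }

  return GapCount;
}
```

In the real code each collected gap would be passed to ConvertMemoryPageAttributes() with EFI_MEMORY_RP (SmmProfile enabled) or EFI_MEMORY_XP (SmmProfile disabled).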
* [Patch V4 15/15] UefiCpuPkg/PiSmmCpuDxeSmm: Remove unnecessary function
2023-05-16 9:59 [Patch V4 00/15] Use CpuPageTableLib to create and update smm page table duntan
` (13 preceding siblings ...)
2023-05-16 9:59 ` [Patch V4 14/15] UefiCpuPkg: Refinement to smm runtime InitPaging() code duntan
@ 2023-05-16 9:59 ` duntan
2023-06-02 3:55 ` Ni, Ray
14 siblings, 1 reply; 44+ messages in thread
From: duntan @ 2023-05-16 9:59 UTC (permalink / raw)
To: devel; +Cc: Eric Dong, Ray Ni, Rahul Kumar, Gerd Hoffmann
Remove the unnecessary function SetNotPresentPage(). We can call
ConvertMemoryPageAttributes() directly to set a range to non-present.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c | 8 ++++++--
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 16 ----------------
UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 22 ----------------------
3 files changed, 6 insertions(+), 40 deletions(-)
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
index d69e976269..7fa1867b63 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
@@ -1074,10 +1074,14 @@ PiCpuSmmEntry (
mSmmShadowStackSize
);
if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
- SetNotPresentPage (
+ ConvertMemoryPageAttributes (
Cr3,
+ mPagingMode,
(EFI_PHYSICAL_ADDRESS)(UINTN)Stacks + mSmmStackSize + EFI_PAGES_TO_SIZE (1) + (mSmmStackSize + mSmmShadowStackSize) * Index,
- EFI_PAGES_TO_SIZE (1)
+ EFI_PAGES_TO_SIZE (1),
+ EFI_MEMORY_RP,
+ TRUE,
+ NULL
);
}
}
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
index 12ad86028e..0dc4d758cc 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
@@ -1247,22 +1247,6 @@ SetShadowStack (
IN UINT64 Length
);
-/**
- Set not present memory.
-
- @param[in] Cr3 The page table base address.
- @param[in] BaseAddress The physical address that is the start address of a memory region.
- @param[in] Length The size in bytes of the memory region.
-
- @retval EFI_SUCCESS The not present memory is set.
-**/
-EFI_STATUS
-SetNotPresentPage (
- IN UINTN Cr3,
- IN EFI_PHYSICAL_ADDRESS BaseAddress,
- IN UINT64 Length
- );
-
/**
Initialize the shadow stack related data structure.
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
index 138ff43c9d..95de472ebf 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
@@ -752,28 +752,6 @@ SetShadowStack (
return Status;
}
-/**
- Set not present memory.
-
- @param[in] Cr3 The page table base address.
- @param[in] BaseAddress The physical address that is the start address of a memory region.
- @param[in] Length The size in bytes of the memory region.
-
- @retval EFI_SUCCESS The not present memory is set.
-**/
-EFI_STATUS
-SetNotPresentPage (
- IN UINTN Cr3,
- IN EFI_PHYSICAL_ADDRESS BaseAddress,
- IN UINT64 Length
- )
-{
- EFI_STATUS Status;
-
- Status = SmmSetMemoryAttributesEx (Cr3, mPagingMode, BaseAddress, Length, EFI_MEMORY_RP);
- return Status;
-}
-
/**
Retrieves a pointer to the system configuration table from the SMM System Table
based on a specified GUID.
--
2.31.1.windows.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
* Re: [Patch V4 02/15] UefiPayloadPkg: Add CpuPageTableLib required by PiSmmCpuDxe
2023-05-16 9:59 ` [Patch V4 02/15] UefiPayloadPkg: " duntan
@ 2023-05-16 10:01 ` Guo, Gua
0 siblings, 0 replies; 44+ messages in thread
From: Guo, Gua @ 2023-05-16 10:01 UTC (permalink / raw)
To: Tan, Dun, devel@edk2.groups.io
Cc: Dong, Guo, Ni, Ray, Rhodes, Sean, Lu, James
Reviewed-by: Gua Guo <gua.guo@intel.com>
-----Original Message-----
From: Tan, Dun <dun.tan@intel.com>
Sent: Tuesday, May 16, 2023 5:59 PM
To: devel@edk2.groups.io
Cc: Dong, Guo <guo.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Rhodes, Sean <sean@starlabs.systems>; Lu, James <james.lu@intel.com>; Guo, Gua <gua.guo@intel.com>
Subject: [Patch V4 02/15] UefiPayloadPkg: Add CpuPageTableLib required by PiSmmCpuDxe
Add CpuPageTableLib required by PiSmmCpuDxeSmm in UefiPayloadPkg.dsc.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
---
UefiPayloadPkg/UefiPayloadPkg.dsc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/UefiPayloadPkg/UefiPayloadPkg.dsc b/UefiPayloadPkg/UefiPayloadPkg.dsc
index 998d222909..66ec528ee6 100644
--- a/UefiPayloadPkg/UefiPayloadPkg.dsc
+++ b/UefiPayloadPkg/UefiPayloadPkg.dsc
@@ -206,6 +206,7 @@
OpensslLib|CryptoPkg/Library/OpensslLib/OpensslLib.inf
RngLib|MdePkg/Library/BaseRngLib/BaseRngLib.inf
HobLib|UefiPayloadPkg/Library/DxeHobLib/DxeHobLib.inf
+
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
#
# UEFI & PI
@@ -345,7 +346,6 @@
DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
!endif
CpuExceptionHandlerLib|UefiCpuPkg/Library/CpuExceptionHandlerLib/DxeCpuExceptionHandlerLib.inf
- CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
!if $(PERFORMANCE_MEASUREMENT_ENABLE)
PerformanceLib|MdeModulePkg/Library/DxePerformanceLib/DxePerformanceLib.inf
--
2.31.1.windows.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
* Re: [edk2-devel] [Patch V4 04/15] MdeModulePkg: Remove RO and NX protection when unset guard page
2023-05-16 9:59 ` [Patch V4 04/15] MdeModulePkg: Remove RO and NX protection when unset guard page duntan
@ 2023-05-16 19:04 ` Kun Qin
2023-05-17 10:16 ` duntan
0 siblings, 1 reply; 44+ messages in thread
From: Kun Qin @ 2023-05-16 19:04 UTC (permalink / raw)
To: devel, dun.tan; +Cc: Liming Gao, Ray Ni, Jian J Wang
Hi Dun,
I might have missed the context, but could you please explain why we
need to clear "EFI_MEMORY_XP"?
It is understandable that you would like to clear RO. But would it make
more sense to clear XP only when needed (i.e. code page allocation)?
Thanks,
Kun
On 5/16/2023 2:59 AM, duntan wrote:
> Remove RO and NX protection when unset guard page.
> When UnsetGuardPage(), remove all the memory attribute protection
> for guarded page.
>
> Signed-off-by: Dun Tan <dun.tan@intel.com>
> Cc: Liming Gao <gaoliming@byosoft.com.cn>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Jian J Wang <jian.j.wang@intel.com>
> ---
> MdeModulePkg/Core/PiSmmCore/HeapGuard.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> index 8f3bab6fee..7daeeccf13 100644
> --- a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> +++ b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> @@ -553,7 +553,7 @@ UnsetGuardPage (
> mSmmMemoryAttribute,
> BaseAddress,
> EFI_PAGE_SIZE,
> - EFI_MEMORY_RP
> + EFI_MEMORY_RP|EFI_MEMORY_RO|EFI_MEMORY_XP
> );
> ASSERT_EFI_ERROR (Status);
> mOnGuarding = FALSE;
^ permalink raw reply [flat|nested] 44+ messages in thread
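The attribute math behind the one-line HeapGuard change can be illustrated directly: clearing EFI_MEMORY_RP, EFI_MEMORY_RO and EFI_MEMORY_XP together returns the guard page to present, writable and executable. A hedged sketch using the attribute bit values from the UEFI specification; the helper name is hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Memory attribute bits as defined by the UEFI specification. */
#define EFI_MEMORY_RP  0x0000000000002000ULL  /* read-protected (non-present) */
#define EFI_MEMORY_XP  0x0000000000004000ULL  /* execute-protected            */
#define EFI_MEMORY_RO  0x0000000000020000ULL  /* read-only                    */

/* Hypothetical helper mirroring the mask cleared by the patched
   UnsetGuardPage(): all three protections are removed at once. */
uint64_t UnsetGuardAttributes (uint64_t Current)
{
  return Current & ~(EFI_MEMORY_RP | EFI_MEMORY_RO | EFI_MEMORY_XP);
}
```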
* Re: [edk2-devel] [Patch V4 04/15] MdeModulePkg: Remove RO and NX protection when unset guard page
2023-05-16 19:04 ` [edk2-devel] " Kun Qin
@ 2023-05-17 10:16 ` duntan
0 siblings, 0 replies; 44+ messages in thread
From: duntan @ 2023-05-17 10:16 UTC (permalink / raw)
To: Kun Qin, devel@edk2.groups.io; +Cc: Gao, Liming, Ni, Ray, Wang, Jian J
Hi Kun,
When we unset a guarded page back to unguarded, from the page table perspective we are setting a non-present page to present.
I think it's reasonable to set the memory attributes to appropriate values when mapping a non-present range to present. In the SMM initial page table, a free page is set to present, writable and executable (with the Present bit (bit 0) and R/W bit (bit 1) set to 1, and XD (bit 63) set to 0). That's why I also clear XP.
Thanks,
Dun
-----Original Message-----
From: Kun Qin <kuqin12@gmail.com>
Sent: Wednesday, May 17, 2023 3:05 AM
To: devel@edk2.groups.io; Tan, Dun <dun.tan@intel.com>
Cc: Gao, Liming <gaoliming@byosoft.com.cn>; Ni, Ray <ray.ni@intel.com>; Wang, Jian J <jian.j.wang@intel.com>
Subject: Re: [edk2-devel] [Patch V4 04/15] MdeModulePkg: Remove RO and NX protection when unset guard page
Hi Dun,
I might have missed the context, but could you please explain why we need to clear "EFI_MEMORY_XP"?
It is understandable that you would like to clear RO. But would it make more sense to clear XP only when needed (i.e. code page allocation)?
Thanks,
Kun
On 5/16/2023 2:59 AM, duntan wrote:
> Remove RO and NX protection when unset guard page.
> When UnsetGuardPage(), remove all the memory attribute protection for
> guarded page.
>
> Signed-off-by: Dun Tan <dun.tan@intel.com>
> Cc: Liming Gao <gaoliming@byosoft.com.cn>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Jian J Wang <jian.j.wang@intel.com>
> ---
> MdeModulePkg/Core/PiSmmCore/HeapGuard.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> index 8f3bab6fee..7daeeccf13 100644
> --- a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> +++ b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> @@ -553,7 +553,7 @@ UnsetGuardPage (
> mSmmMemoryAttribute,
> BaseAddress,
> EFI_PAGE_SIZE,
> - EFI_MEMORY_RP
> +
> + EFI_MEMORY_RP|EFI_MEMORY_RO|EFI_MEMORY_XP
> );
> ASSERT_EFI_ERROR (Status);
> mOnGuarding = FALSE;
^ permalink raw reply [flat|nested] 44+ messages in thread
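The paging-entry bits Dun refers to can be checked mechanically. A small sketch using the standard x86 entry bit positions (Present = bit 0, R/W = bit 1, XD/NX = bit 63); the helper name is illustrative, not from the firmware:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Standard x86/x64 paging-entry bit positions. */
#define IA32_PG_P   (1ULL << 0)    /* Present              */
#define IA32_PG_RW  (1ULL << 1)    /* Read/Write           */
#define IA32_PG_NX  (1ULL << 63)   /* XD: execute-disable  */

/* A "free" SMM page as described above is present, writable
   and executable: P = 1, RW = 1, XD = 0. */
bool IsPresentWritableExecutable (uint64_t Entry)
{
  return ((Entry & IA32_PG_P)  != 0) &&
         ((Entry & IA32_PG_RW) != 0) &&
         ((Entry & IA32_PG_NX) == 0);
}
```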
* Re: [edk2-devel] [Patch V4 07/15] UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP
2023-05-16 9:59 ` [Patch V4 07/15] UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP duntan
@ 2023-05-20 2:00 ` Kun Qin
2023-05-23 9:14 ` duntan
2023-06-02 3:09 ` Ni, Ray
1 sibling, 1 reply; 44+ messages in thread
From: Kun Qin @ 2023-05-20 2:00 UTC (permalink / raw)
To: devel, dun.tan; +Cc: Eric Dong, Ray Ni, Rahul Kumar, Gerd Hoffmann
Hi Dun,
Thanks for the notice on the other thread (v4 04/15).
I have a few more questions on this specific patch (and a few associated
commits related to it):
Why would we allow page table manipulation after `mIsReadOnlyPageTable`
is evaluated to TRUE?
As far as I can tell, `mIsReadOnlyPageTable` is set to TRUE inside
`SetPageTableAttributes` function,
but then we also have code in `InitializePageTablePool` to expect more
page tables to be allocated.
Could you please let me know when this would happen?
I thought there would not be any new page table memory (i.e. memory
attribute update) after ready
to lock with restricted memory access option? With these change, it
seems to be doable through
EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL now, is that correct? If so, would
you mind shedding
some light on what other behavior changes there might be?
In addition, I might have missed it in the patch series. If the newly
allocated page memory is marked as read-only after the above flag is set
to TRUE, how would the callers be able to use them?
Thanks in advance.
Regards,
Kun
On 5/16/2023 2:59 AM, duntan wrote:
> Add two functions to disable/enable CR0.WP. These two functions
> will also be used in later commits. This commit doesn't change any
> functionality.
>
> Signed-off-by: Dun Tan <dun.tan@intel.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 24 ++++++++++++++++++++++++
> UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-------------------------------------------------
> 2 files changed, 90 insertions(+), 49 deletions(-)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> index ba341cadc6..e0c4ca76dc 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> @@ -1565,4 +1565,28 @@ SmmWaitForApArrival (
> VOID
> );
>
> +/**
> + Disable Write Protect on pages marked as read-only if Cr0.Bits.WP is 1.
> +
> + @param[out] WpEnabled If Cr0.WP is enabled.
> + @param[out] CetEnabled If CET is enabled.
> +**/
> +VOID
> +DisableReadOnlyPageWriteProtect (
> + OUT BOOLEAN *WpEnabled,
> + OUT BOOLEAN *CetEnabled
> + );
> +
> +/**
> + Enable Write Protect on pages marked as read-only.
> +
> + @param[out] WpEnabled If Cr0.WP should be enabled.
> + @param[out] CetEnabled If CET should be enabled.
> +**/
> +VOID
> +EnableReadOnlyPageWriteProtect (
> + BOOLEAN WpEnabled,
> + BOOLEAN CetEnabled
> + );
> +
> #endif
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> index 2faee8f859..4b512edf68 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> @@ -40,6 +40,64 @@ PAGE_TABLE_POOL *mPageTablePool = NULL;
> //
> BOOLEAN mIsReadOnlyPageTable = FALSE;
>
> +/**
> + Disable Write Protect on pages marked as read-only if Cr0.Bits.WP is 1.
> +
> + @param[out] WpEnabled If Cr0.WP is enabled.
> + @param[out] CetEnabled If CET is enabled.
> +**/
> +VOID
> +DisableReadOnlyPageWriteProtect (
> + OUT BOOLEAN *WpEnabled,
> + OUT BOOLEAN *CetEnabled
> + )
> +{
> + IA32_CR0 Cr0;
> +
> + *CetEnabled = ((AsmReadCr4 () & CR4_CET_ENABLE) != 0) ? TRUE : FALSE;
> + Cr0.UintN = AsmReadCr0 ();
> + *WpEnabled = (Cr0.Bits.WP != 0) ? TRUE : FALSE;
> + if (*WpEnabled) {
> + if (*CetEnabled) {
> + //
> + // CET must be disabled if WP is disabled. Disable CET before clearing CR0.WP.
> + //
> + DisableCet ();
> + }
> +
> + Cr0.Bits.WP = 0;
> + AsmWriteCr0 (Cr0.UintN);
> + }
> +}
> +
> +/**
> + Enable Write Protect on pages marked as read-only.
> +
> + @param[out] WpEnabled If Cr0.WP should be enabled.
> + @param[out] CetEnabled If CET should be enabled.
> +**/
> +VOID
> +EnableReadOnlyPageWriteProtect (
> + BOOLEAN WpEnabled,
> + BOOLEAN CetEnabled
> + )
> +{
> + IA32_CR0 Cr0;
> +
> + if (WpEnabled) {
> + Cr0.UintN = AsmReadCr0 ();
> + Cr0.Bits.WP = 1;
> + AsmWriteCr0 (Cr0.UintN);
> +
> + if (CetEnabled) {
> + //
> + // re-enable CET.
> + //
> + EnableCet ();
> + }
> + }
> +}
> +
> /**
> Initialize a buffer pool for page table use only.
>
> @@ -62,10 +120,9 @@ InitializePageTablePool (
> IN UINTN PoolPages
> )
> {
> - VOID *Buffer;
> - BOOLEAN CetEnabled;
> - BOOLEAN WpEnabled;
> - IA32_CR0 Cr0;
> + VOID *Buffer;
> + BOOLEAN WpEnabled;
> + BOOLEAN CetEnabled;
>
> //
> // Always reserve at least PAGE_TABLE_POOL_UNIT_PAGES, including one page for
> @@ -102,34 +159,9 @@ InitializePageTablePool (
> // If page table memory has been marked as RO, mark the new pool pages as read-only.
> //
> if (mIsReadOnlyPageTable) {
> - CetEnabled = ((AsmReadCr4 () & CR4_CET_ENABLE) != 0) ? TRUE : FALSE;
> - Cr0.UintN = AsmReadCr0 ();
> - WpEnabled = (Cr0.Bits.WP != 0) ? TRUE : FALSE;
> - if (WpEnabled) {
> - if (CetEnabled) {
> - //
> - // CET must be disabled if WP is disabled. Disable CET before clearing CR0.WP.
> - //
> - DisableCet ();
> - }
> -
> - Cr0.Bits.WP = 0;
> - AsmWriteCr0 (Cr0.UintN);
> - }
> -
> + DisableReadOnlyPageWriteProtect (&WpEnabled, &CetEnabled);
> SmmSetMemoryAttributes ((EFI_PHYSICAL_ADDRESS)(UINTN)Buffer, EFI_PAGES_TO_SIZE (PoolPages), EFI_MEMORY_RO);
> - if (WpEnabled) {
> - Cr0.UintN = AsmReadCr0 ();
> - Cr0.Bits.WP = 1;
> - AsmWriteCr0 (Cr0.UintN);
> -
> - if (CetEnabled) {
> - //
> - // re-enable CET.
> - //
> - EnableCet ();
> - }
> - }
> + EnableReadOnlyPageWriteProtect (WpEnabled, CetEnabled);
> }
>
> return TRUE;
> @@ -1782,6 +1814,7 @@ SetPageTableAttributes (
> VOID
> )
> {
> + BOOLEAN WpEnabled;
> BOOLEAN CetEnabled;
>
> if (!IfReadOnlyPageTableNeeded ()) {
> @@ -1794,15 +1827,7 @@ SetPageTableAttributes (
> // Disable write protection, because we need mark page table to be write protected.
> // We need *write* page table memory, to mark itself to be *read only*.
> //
> - CetEnabled = ((AsmReadCr4 () & CR4_CET_ENABLE) != 0) ? TRUE : FALSE;
> - if (CetEnabled) {
> - //
> - // CET must be disabled if WP is disabled.
> - //
> - DisableCet ();
> - }
> -
> - AsmWriteCr0 (AsmReadCr0 () & ~CR0_WP);
> + DisableReadOnlyPageWriteProtect (&WpEnabled, &CetEnabled);
>
> // Set memory used by page table as Read Only.
> DEBUG ((DEBUG_INFO, "Start...\n"));
> @@ -1811,20 +1836,12 @@ SetPageTableAttributes (
> //
> // Enable write protection, after page table attribute updated.
> //
> - AsmWriteCr0 (AsmReadCr0 () | CR0_WP);
> + EnableReadOnlyPageWriteProtect (TRUE, CetEnabled);
> mIsReadOnlyPageTable = TRUE;
>
> //
> // Flush TLB after mark all page table pool as read only.
> //
> FlushTlbForAll ();
> -
> - if (CetEnabled) {
> - //
> - // re-enable CET.
> - //
> - EnableCet ();
> - }
> -
> return;
> }
^ permalink raw reply [flat|nested] 44+ messages in thread
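The ordering constraint encoded in the patch — CET must be disabled before CR0.WP is cleared, and re-enabled only after WP is restored — can be sketched with simulated register state. Here `gWp`/`gCet` stand in for the real CR0.WP and CR4 CET bits; this is an illustration of the save/disable/restore pattern, not the firmware code:

```c
#include <assert.h>
#include <stdbool.h>

bool gWp  = true;   /* simulated CR0.WP        */
bool gCet = true;   /* simulated CET enable    */

/* Save the current state, then drop WP, disabling CET first if needed
   (mirrors DisableReadOnlyPageWriteProtect). */
void DisableWpSim (bool *WpWas, bool *CetWas)
{
  *CetWas = gCet;
  *WpWas  = gWp;
  if (*WpWas) {
    if (*CetWas) {
      gCet = false;   /* CET must be off before clearing WP */
    }
    gWp = false;
  }
}

/* Restore WP first, then CET, in the reverse order
   (mirrors EnableReadOnlyPageWriteProtect). */
void EnableWpSim (bool WpWas, bool CetWas)
{
  if (WpWas) {
    gWp = true;
    if (CetWas) {
      gCet = true;    /* re-enable CET only after WP is back on */
    }
  }
}
```

Callers pass the two saved booleans straight from the disable call to the enable call, so nesting and the WP-already-clear case fall out naturally.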
* Re: [edk2-devel] [Patch V4 07/15] UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP
2023-05-20 2:00 ` [edk2-devel] " Kun Qin
@ 2023-05-23 9:14 ` duntan
2023-05-24 18:39 ` Kun Qin
0 siblings, 1 reply; 44+ messages in thread
From: duntan @ 2023-05-23 9:14 UTC (permalink / raw)
To: Kun Qin, devel@edk2.groups.io
Cc: Dong, Eric, Ni, Ray, Kumar, Rahul R, Gerd Hoffmann
Hi Kun,
I've updated my answers in your original mail.
Thanks,
Dun
-----Original Message-----
From: Kun Qin <kuqin12@gmail.com>
Sent: Saturday, May 20, 2023 10:00 AM
To: devel@edk2.groups.io; Tan, Dun <dun.tan@intel.com>
Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
Subject: Re: [edk2-devel] [Patch V4 07/15] UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP
Hi Dun,
Thanks for the notice on the other thread (v4 04/15).
I have a few more questions on this specific patch (and a few associated commits related to it):
Why would we allow page table manipulation after `mIsReadOnlyPageTable` is evaluated to TRUE?
Dun: `mIsReadOnlyPageTable` is a flag to indicate that the current page table has been marked as RO and that newly allocated pools should also be RO. We only need to clear CR0.WP before modifying the page table.
As far as I can tell, `mIsReadOnlyPageTable` is set to TRUE inside `SetPageTableAttributes` function, but then we also have code in `InitializePageTablePool` to expect more page tables to be allocated.
Could you please let me know when this would happen?
Dun: After `SetPageTableAttributes`, the 'SmmCpuFeaturesCompleteSmmReadyToLock()' API of a platform's SmmCpuFeaturesLib may use EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL to convert memory attributes. Also, an SMI handler after ReadyToLock may still use EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL to convert memory attributes. During this process, if a page split happens, a new page table pool may be allocated.
I thought there would not be any new page table memory (i.e. memory attribute update) after ready to lock with restricted memory access option? With these change, it seems to be doable through EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL now, is that correct? If so, would you mind shedding some light on what other behavior changes there might be?
Dun: Do you mean that we should check for ReadyToLock in the EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL implementation to make sure that the page table won't be modified after ReadyToLock?
If so, as I mentioned above, the page table may still be modified after ReadyToLock.
In addition, I might have missed it in the patch series. If the newly allocated page memory is marked as read-only after the above flag is set to TRUE, how would the callers be able to use them?
Dun: The caller can clear CR0.WP before modifying the page table.
Thanks in advance.
Regards,
Kun
On 5/16/2023 2:59 AM, duntan wrote:
> Add two functions to disable/enable CR0.WP. These two functions will
> also be used in later commits. This commit doesn't change any
> functionality.
>
> Signed-off-by: Dun Tan <dun.tan@intel.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 24 ++++++++++++++++++++++++
> UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-------------------------------------------------
> 2 files changed, 90 insertions(+), 49 deletions(-)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> index ba341cadc6..e0c4ca76dc 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> @@ -1565,4 +1565,28 @@ SmmWaitForApArrival (
> VOID
> );
>
> +/**
> + Disable Write Protect on pages marked as read-only if Cr0.Bits.WP is 1.
> +
> + @param[out] WpEnabled If Cr0.WP is enabled.
> + @param[out] CetEnabled If CET is enabled.
> +**/
> +VOID
> +DisableReadOnlyPageWriteProtect (
> + OUT BOOLEAN *WpEnabled,
> + OUT BOOLEAN *CetEnabled
> + );
> +
> +/**
> + Enable Write Protect on pages marked as read-only.
> +
> + @param[out] WpEnabled If Cr0.WP should be enabled.
> + @param[out] CetEnabled If CET should be enabled.
> +**/
> +VOID
> +EnableReadOnlyPageWriteProtect (
> + BOOLEAN WpEnabled,
> + BOOLEAN CetEnabled
> + );
> +
> #endif
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> index 2faee8f859..4b512edf68 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> @@ -40,6 +40,64 @@ PAGE_TABLE_POOL *mPageTablePool = NULL;
> //
> BOOLEAN mIsReadOnlyPageTable = FALSE;
>
> +/**
> + Disable Write Protect on pages marked as read-only if Cr0.Bits.WP is 1.
> +
> + @param[out] WpEnabled If Cr0.WP is enabled.
> + @param[out] CetEnabled If CET is enabled.
> +**/
> +VOID
> +DisableReadOnlyPageWriteProtect (
> + OUT BOOLEAN *WpEnabled,
> + OUT BOOLEAN *CetEnabled
> + )
> +{
> + IA32_CR0 Cr0;
> +
> + *CetEnabled = ((AsmReadCr4 () & CR4_CET_ENABLE) != 0) ? TRUE : FALSE;
> + Cr0.UintN = AsmReadCr0 ();
> + *WpEnabled = (Cr0.Bits.WP != 0) ? TRUE : FALSE; if (*WpEnabled) {
> + if (*CetEnabled) {
> + //
> + // CET must be disabled if WP is disabled. Disable CET before clearing CR0.WP.
> + //
> + DisableCet ();
> + }
> +
> + Cr0.Bits.WP = 0;
> + AsmWriteCr0 (Cr0.UintN);
> + }
> +}
> +
> +/**
> + Enable Write Protect on pages marked as read-only.
> +
> + @param[out] WpEnabled If Cr0.WP should be enabled.
> + @param[out] CetEnabled If CET should be enabled.
> +**/
> +VOID
> +EnableReadOnlyPageWriteProtect (
> + BOOLEAN WpEnabled,
> + BOOLEAN CetEnabled
> + )
> +{
> + IA32_CR0 Cr0;
> +
> + if (WpEnabled) {
> + Cr0.UintN = AsmReadCr0 ();
> + Cr0.Bits.WP = 1;
> + AsmWriteCr0 (Cr0.UintN);
> +
> + if (CetEnabled) {
> + //
> + // re-enable CET.
> + //
> + EnableCet ();
> + }
> + }
> +}
> +
> /**
> Initialize a buffer pool for page table use only.
>
> @@ -62,10 +120,9 @@ InitializePageTablePool (
> IN UINTN PoolPages
> )
> {
> - VOID *Buffer;
> - BOOLEAN CetEnabled;
> - BOOLEAN WpEnabled;
> - IA32_CR0 Cr0;
> + VOID *Buffer;
> + BOOLEAN WpEnabled;
> + BOOLEAN CetEnabled;
>
> //
> // Always reserve at least PAGE_TABLE_POOL_UNIT_PAGES, including
> one page for @@ -102,34 +159,9 @@ InitializePageTablePool (
> // If page table memory has been marked as RO, mark the new pool pages as read-only.
> //
> if (mIsReadOnlyPageTable) {
> - CetEnabled = ((AsmReadCr4 () & CR4_CET_ENABLE) != 0) ? TRUE : FALSE;
> - Cr0.UintN = AsmReadCr0 ();
> - WpEnabled = (Cr0.Bits.WP != 0) ? TRUE : FALSE;
> - if (WpEnabled) {
> - if (CetEnabled) {
> - //
> - // CET must be disabled if WP is disabled. Disable CET before clearing CR0.WP.
> - //
> - DisableCet ();
> - }
> -
> - Cr0.Bits.WP = 0;
> - AsmWriteCr0 (Cr0.UintN);
> - }
> -
> + DisableReadOnlyPageWriteProtect (&WpEnabled, &CetEnabled);
> SmmSetMemoryAttributes ((EFI_PHYSICAL_ADDRESS)(UINTN)Buffer, EFI_PAGES_TO_SIZE (PoolPages), EFI_MEMORY_RO);
> - if (WpEnabled) {
> - Cr0.UintN = AsmReadCr0 ();
> - Cr0.Bits.WP = 1;
> - AsmWriteCr0 (Cr0.UintN);
> -
> - if (CetEnabled) {
> - //
> - // re-enable CET.
> - //
> - EnableCet ();
> - }
> - }
> + EnableReadOnlyPageWriteProtect (WpEnabled, CetEnabled);
> }
>
> return TRUE;
> @@ -1782,6 +1814,7 @@ SetPageTableAttributes (
> VOID
> )
> {
> + BOOLEAN WpEnabled;
> BOOLEAN CetEnabled;
>
>    if (!IfReadOnlyPageTableNeeded ()) {
> @@ -1794,15 +1827,7 @@ SetPageTableAttributes (
> // Disable write protection, because we need mark page table to be write protected.
> // We need *write* page table memory, to mark itself to be *read only*.
> //
> -  CetEnabled = ((AsmReadCr4 () & CR4_CET_ENABLE) != 0) ? TRUE : FALSE;
> - if (CetEnabled) {
> - //
> - // CET must be disabled if WP is disabled.
> - //
> - DisableCet ();
> - }
> -
> - AsmWriteCr0 (AsmReadCr0 () & ~CR0_WP);
> + DisableReadOnlyPageWriteProtect (&WpEnabled, &CetEnabled);
>
> // Set memory used by page table as Read Only.
>    DEBUG ((DEBUG_INFO, "Start...\n"));
> @@ -1811,20 +1836,12 @@ SetPageTableAttributes (
> //
> // Enable write protection, after page table attribute updated.
> //
> - AsmWriteCr0 (AsmReadCr0 () | CR0_WP);
> + EnableReadOnlyPageWriteProtect (TRUE, CetEnabled);
> mIsReadOnlyPageTable = TRUE;
>
> //
> // Flush TLB after mark all page table pool as read only.
> //
> FlushTlbForAll ();
> -
> - if (CetEnabled) {
> - //
> - // re-enable CET.
> - //
> - EnableCet ();
> - }
> -
> return;
> }
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [edk2-devel] [Patch V4 07/15] UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP
2023-05-23 9:14 ` duntan
@ 2023-05-24 18:39 ` Kun Qin
2023-05-25 0:46 ` Ni, Ray
0 siblings, 1 reply; 44+ messages in thread
From: Kun Qin @ 2023-05-24 18:39 UTC (permalink / raw)
To: devel, dun.tan; +Cc: Dong, Eric, Ni, Ray, Kumar, Rahul R, Gerd Hoffmann
Hi Dun,
Thanks for your reply. That was helpful!
Just a follow-up question, is there any plan to support heap guard with
PcdCpuSmmRestrictedMemoryAccess enabled after these changes? I think it
would be a great value prop for the developers to have both features
enabled during the firmware validation process.
Thanks,
Kun
On 5/23/2023 2:14 AM, duntan wrote:
> Hi Kun,
>
> I've updated my answers in your original mail.
>
> Thanks,
> Dun
>
> -----Original Message-----
> From: Kun Qin <kuqin12@gmail.com>
> Sent: Saturday, May 20, 2023 10:00 AM
> To: devel@edk2.groups.io; Tan, Dun <dun.tan@intel.com>
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: Re: [edk2-devel] [Patch V4 07/15] UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP
>
> Hi Dun,
>
> Thanks for the notice on the other thread (v4 04/15).
>
> I have a few more questions on this specific patch (and a few associated commits related to it):
>
> Why would we allow page table manipulation after `mIsReadOnlyPageTable` is evaluated to TRUE?
> Dun: `mIsReadOnlyPageTable` is a flag to indicate that the current page table has been marked as RO and that newly allocated pools should also be RO. We only need to clear Cr0.WP before modifying the page table.
>
>
> As far as I can tell, `mIsReadOnlyPageTable` is set to TRUE inside the `SetPageTableAttributes` function, but then we also have code in `InitializePageTablePool` that expects more page tables to be allocated.
> Could you please let me know when this would happen?
> Dun: After `SetPageTableAttributes`, the 'SmmCpuFeaturesCompleteSmmReadyToLock()' API of platform-specific SmmCpuFeaturesLib implementations may use EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL to convert memory attributes. Also, in SMI handlers after ReadyToLock, EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL may still be used to convert memory attributes. During this process, if a page split happens, a new page table pool may be allocated.
>
>
> I thought there would not be any new page table memory (i.e. memory attribute updates) after ready to lock with the restricted memory access option? With these changes, it seems to be doable through EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL now, is that correct? If so, would you mind shedding some light on what other behavior changes there might be?
> Dun: Do you mean that we should check for ReadyToLock in the EDKII_SMM_MEMORY_ATTRIBUTE_PROTOCOL implementation to make sure that the page table won't be modified after ReadyToLock?
> If so, as I mentioned above, the page table may still be modified after ReadyToLock.
>
>
> In addition, I might have missed it in the patch series. If the newly allocated page memory is marked as read-only after the above flag is set to TRUE, how would the callers be able to use it?
> Dun: The caller can clear Cr0.WP before modifying the page table.
>
>
> Thanks in advance.
>
> Regards,
> Kun
>
> On 5/16/2023 2:59 AM, duntan wrote:
>> Add two functions to disable/enable CR0.WP. These two functions will
>> also be used in later commits. This commit doesn't change any
>> functionality.
>>
>> Signed-off-by: Dun Tan <dun.tan@intel.com>
>> Cc: Eric Dong <eric.dong@intel.com>
>> Cc: Ray Ni <ray.ni@intel.com>
>> Cc: Rahul Kumar <rahul1.kumar@intel.com>
>> Cc: Gerd Hoffmann <kraxel@redhat.com>
>> ---
>> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 24 ++++++++++++++++++++++++
>> UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-------------------------------------------------
>> 2 files changed, 90 insertions(+), 49 deletions(-)
>>
>> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
>> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
>> index ba341cadc6..e0c4ca76dc 100644
>> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
>> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
>> @@ -1565,4 +1565,28 @@ SmmWaitForApArrival (
>> VOID
>> );
>>
>> +/**
>> + Disable Write Protect on pages marked as read-only if Cr0.Bits.WP is 1.
>> +
>> + @param[out] WpEnabled If Cr0.WP is enabled.
>> + @param[out] CetEnabled If CET is enabled.
>> +**/
>> +VOID
>> +DisableReadOnlyPageWriteProtect (
>> + OUT BOOLEAN *WpEnabled,
>> + OUT BOOLEAN *CetEnabled
>> + );
>> +
>> +/**
>> + Enable Write Protect on pages marked as read-only.
>> +
>> +  @param[in] WpEnabled   If Cr0.WP should be enabled.
>> +  @param[in] CetEnabled  If CET should be enabled.
>> +**/
>> +VOID
>> +EnableReadOnlyPageWriteProtect (
>> + BOOLEAN WpEnabled,
>> + BOOLEAN CetEnabled
>> + );
>> +
>> #endif
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [edk2-devel] [Patch V4 07/15] UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP
2023-05-24 18:39 ` Kun Qin
@ 2023-05-25 0:46 ` Ni, Ray
2023-05-26 2:48 ` Kun Qin
0 siblings, 1 reply; 44+ messages in thread
From: Ni, Ray @ 2023-05-25 0:46 UTC (permalink / raw)
To: Kun Qin, devel@edk2.groups.io, Tan, Dun
Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
Kun,
Thanks for raising that up😊
We have some ideas. Will post them later.
Looking forward to working with the community.
Thanks,
Ray
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [edk2-devel] [Patch V4 07/15] UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP
2023-05-25 0:46 ` Ni, Ray
@ 2023-05-26 2:48 ` Kun Qin
0 siblings, 0 replies; 44+ messages in thread
From: Kun Qin @ 2023-05-26 2:48 UTC (permalink / raw)
To: Ni, Ray, devel@edk2.groups.io, Tan, Dun
Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
Thanks, Ray. Looking forward to seeing the ideas on this feature!
Regards,
Kun
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [Patch V4 05/15] UefiCpuPkg: Use CpuPageTableLib to convert SMM paging attribute.
2023-05-16 9:59 ` [Patch V4 05/15] UefiCpuPkg: Use CpuPageTableLib to convert SMM paging attribute duntan
@ 2023-06-01 1:09 ` Ni, Ray
0 siblings, 0 replies; 44+ messages in thread
From: Ni, Ray @ 2023-06-01 1:09 UTC (permalink / raw)
To: Tan, Dun, devel@edk2.groups.io; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> index 3deb1ffd67..a25a96f68c 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> @@ -1,7 +1,7 @@
> /** @file
> Page Fault (#PF) handler for X64 processors
>
> -Copyright (c) 2009 - 2022, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2009 - 2023, Intel Corporation. All rights reserved.<BR>
> Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
>
> SPDX-License-Identifier: BSD-2-Clause-Patent
> @@ -353,7 +353,12 @@ SmmInitPageTable (
> m1GPageTableSupport = Is1GPageSupport ();
> m5LevelPagingNeeded = Is5LevelPagingNeeded ();
> mPhysicalAddressBits = CalculateMaximumSupportAddress ();
> -  PatchInstructionX86 (gPatch5LevelPagingNeeded, m5LevelPagingNeeded, 1);
> + if (m5LevelPagingNeeded) {
> + mPagingMode = m1GPageTableSupport ? Paging5Level1GB : Paging5Level;
> + PatchInstructionX86 (gPatch5LevelPagingNeeded, TRUE, 1);
1. This change assumes the default value in the assembly is 0, while the old logic
made no such assumption. Can you patch the instruction regardless of m5LevelPagingNeeded?
> + DEBUG_CODE (
> + if (((Attributes & EFI_MEMORY_RO) == 0) || (((Attributes &
> EFI_MEMORY_XP) == 0) && (mXdSupported))) {
> + //
> +    // When mapping a range to present and EFI_MEMORY_RO or EFI_MEMORY_XP is not specified,
> +    // check if [BaseAddress, BaseAddress + Length] contains present range.
> +    // Existing Present range in [BaseAddress, BaseAddress + Length] is set to NX disable and ReadOnly.
> + //
> + Count = 0;
> + Map = NULL;
> + Status = PageTableParse (PageTableBase, mPagingMode, NULL, &Count);
> + while (Status == RETURN_BUFFER_TOO_SMALL) {
> + if (Map != NULL) {
> + FreePool (Map);
> + }
>
> - if (IsModified != NULL) {
> - *IsModified = TRUE;
> + Map = AllocatePool (Count * sizeof (IA32_MAP_ENTRY));
> + ASSERT (Map != NULL);
> + Status = PageTableParse (PageTableBase, mPagingMode, Map, &Count);
> + }
> +
> + ASSERT_RETURN_ERROR (Status);
> +
> + for (Index = 0; Index < Count; Index++) {
> + if ((BaseAddress < Map[Index].LinearAddress +
> + Map[Index].Length) && (BaseAddress + Length >
> Map[Index].LinearAddress))
> + {
> +        DEBUG ((DEBUG_ERROR, "SMM ConvertMemoryPageAttributes: Existing Present range in [0x%lx, 0x%lx] is set to NX disable and ReadOnly\n", BaseAddress, BaseAddress + Length));
> + break;
> + }
> + }
> +
> + FreePool (Map);
> }
2. What's the purpose of the above DEBUG_CODE()?
Because when mapping a range of memory from not-present to present,
the function clears all other attributes and only sets the "present" bit.
If part of the range is already "present", the function might reset
its other attributes. This is not expected by the caller.
So, you want to notify the caller?
Can you split this logic into a separate commit?
If the sub-range's attributes match what you are going to set
for the entire range, the caller can ignore such an error message, right?
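For context, the present-range check being discussed reduces to a half-open interval overlap test between the requested range and each parsed map entry. A minimal standalone sketch (the `MAP_ENTRY` type here is a hypothetical stand-in for CpuPageTableLib's `IA32_MAP_ENTRY`, not the real structure):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for CpuPageTableLib's IA32_MAP_ENTRY. */
typedef struct {
  uint64_t LinearAddress;
  uint64_t Length;
} MAP_ENTRY;

/* TRUE when [Base, Base + Length) intersects the mapped entry --
   the same condition the DEBUG_CODE loop uses to decide whether to
   warn that an existing Present sub-range will be rewritten. */
static bool RangeOverlapsEntry (uint64_t Base, uint64_t Length, const MAP_ENTRY *Entry)
{
  return (Base < Entry->LinearAddress + Entry->Length) &&
         (Base + Length > Entry->LinearAddress);
}
```

With entry [0x2000, 0x3000), a request ending exactly at 0x2000 does not overlap; one ending at 0x2001 does.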
>
> - //
> - // Just split current page
> - // Convert success in next around
> - //
> + );
> }
> }
>
> + if (PagingAttrMask.Uint64 == 0) {
> + return RETURN_SUCCESS;
> + }
> +
> + PageTableBufferSize = 0;
> + Status = PageTableMap (&PageTableBase, PagingMode, NULL,
> &PageTableBufferSize, BaseAddress, Length, &PagingAttribute,
> &PagingAttrMask, IsModified);
> +
> + if (Status == RETURN_BUFFER_TOO_SMALL) {
> + PageTableBuffer = AllocatePageTableMemory (EFI_SIZE_TO_PAGES
> (PageTableBufferSize));
> + ASSERT (PageTableBuffer != NULL);
> + Status = PageTableMap (&PageTableBase, PagingMode, PageTableBuffer,
> &PageTableBufferSize, BaseAddress, Length, &PagingAttribute,
> &PagingAttrMask, IsModified);
> + }
> +
> + if (Status == RETURN_INVALID_PARAMETER) {
> + //
> +    // The only reason that PageTableMap returns RETURN_INVALID_PARAMETER here
> +    // is an attempt to modify other attributes of a non-present range while
> +    // keeping the non-present range as non-present.
> + //
> +    DEBUG ((DEBUG_ERROR, "SMM ConvertMemoryPageAttributes: Non-present range in [0x%lx, 0x%lx] needs to be removed\n", BaseAddress, BaseAddress + Length));
3. I don't quite understand. Can you describe this in a clearer way?
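As background, the quoted PageTableMap() calls follow a common size-negotiation convention: a first call with a NULL buffer fails with RETURN_BUFFER_TOO_SMALL and reports the required size, then a second call with an allocated buffer does the actual mapping. A simplified sketch of that calling pattern with a mocked service (names and return codes are illustrative, not the real CpuPageTableLib signature):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

#define RET_SUCCESS          0
#define RET_BUFFER_TOO_SMALL 1

/* Mocked size-negotiating service: if the buffer is missing or too
   small, report the needed size and ask the caller to retry. */
static int MockMap (void *Buffer, size_t *Size)
{
  const size_t Needed = 64;   /* pretend 64 bytes of page-table pages */
  if (Buffer == NULL || *Size < Needed) {
    *Size = Needed;
    return RET_BUFFER_TOO_SMALL;
  }
  memset (Buffer, 0, Needed);
  return RET_SUCCESS;
}

/* Two-pass caller, mirroring the quoted query-then-allocate flow. */
static int TwoPassMap (void **OutBuffer, size_t *OutSize)
{
  size_t Size   = 0;
  int    Status = MockMap (NULL, &Size);
  if (Status == RET_BUFFER_TOO_SMALL) {
    void *Buffer = malloc (Size);
    if (Buffer == NULL) {
      return -1;
    }
    Status     = MockMap (Buffer, &Size);
    *OutBuffer = Buffer;
  }
  *OutSize = Size;
  return Status;
}
```

The real code uses AllocatePageTableMemory() instead of malloc() so that the new pages come from the protected page-table pool.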
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [Patch V4 07/15] UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP
2023-05-16 9:59 ` [Patch V4 07/15] UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP duntan
2023-05-20 2:00 ` [edk2-devel] " Kun Qin
@ 2023-06-02 3:09 ` Ni, Ray
1 sibling, 0 replies; 44+ messages in thread
From: Ni, Ray @ 2023-06-02 3:09 UTC (permalink / raw)
To: Tan, Dun, devel@edk2.groups.io; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
Reviewed-by: Ray Ni <ray.ni@intel.com>
> -----Original Message-----
> From: Tan, Dun <dun.tan@intel.com>
> Sent: Tuesday, May 16, 2023 5:59 PM
> To: devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Kumar, Rahul
> R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: [Patch V4 07/15] UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to
> disable/enable CR0.WP
>
> Add two functions to disable/enable CR0.WP. These two functions
> will also be used in later commits. This commit doesn't change any
> functionality.
>
> Signed-off-by: Dun Tan <dun.tan@intel.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 24
> ++++++++++++++++++++++++
> UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 115
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> +-------------------------------------------------
> 2 files changed, 90 insertions(+), 49 deletions(-)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> index ba341cadc6..e0c4ca76dc 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> @@ -1565,4 +1565,28 @@ SmmWaitForApArrival (
> VOID
> );
>
> +/**
> + Disable Write Protect on pages marked as read-only if Cr0.Bits.WP is 1.
> +
> + @param[out] WpEnabled If Cr0.WP is enabled.
> + @param[out] CetEnabled If CET is enabled.
> +**/
> +VOID
> +DisableReadOnlyPageWriteProtect (
> + OUT BOOLEAN *WpEnabled,
> + OUT BOOLEAN *CetEnabled
> + );
> +
> +/**
> + Enable Write Protect on pages marked as read-only.
> +
> +  @param[in] WpEnabled   If Cr0.WP should be enabled.
> +  @param[in] CetEnabled  If CET should be enabled.
> +**/
> +VOID
> +EnableReadOnlyPageWriteProtect (
> + BOOLEAN WpEnabled,
> + BOOLEAN CetEnabled
> + );
> +
> #endif
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> index 2faee8f859..4b512edf68 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> @@ -40,6 +40,64 @@ PAGE_TABLE_POOL *mPageTablePool = NULL;
> //
> BOOLEAN mIsReadOnlyPageTable = FALSE;
>
> +/**
> + Disable Write Protect on pages marked as read-only if Cr0.Bits.WP is 1.
> +
> + @param[out] WpEnabled If Cr0.WP is enabled.
> + @param[out] CetEnabled If CET is enabled.
> +**/
> +VOID
> +DisableReadOnlyPageWriteProtect (
> + OUT BOOLEAN *WpEnabled,
> + OUT BOOLEAN *CetEnabled
> + )
> +{
> + IA32_CR0 Cr0;
> +
> + *CetEnabled = ((AsmReadCr4 () & CR4_CET_ENABLE) != 0) ? TRUE : FALSE;
> + Cr0.UintN = AsmReadCr0 ();
> + *WpEnabled = (Cr0.Bits.WP != 0) ? TRUE : FALSE;
> + if (*WpEnabled) {
> + if (*CetEnabled) {
> + //
> + // CET must be disabled if WP is disabled. Disable CET before clearing
> CR0.WP.
> + //
> + DisableCet ();
> + }
> +
> + Cr0.Bits.WP = 0;
> + AsmWriteCr0 (Cr0.UintN);
> + }
> +}
> +
> +/**
> + Enable Write Protect on pages marked as read-only.
> +
> +  @param[in] WpEnabled   If Cr0.WP should be enabled.
> +  @param[in] CetEnabled  If CET should be enabled.
> +**/
> +VOID
> +EnableReadOnlyPageWriteProtect (
> + BOOLEAN WpEnabled,
> + BOOLEAN CetEnabled
> + )
> +{
> + IA32_CR0 Cr0;
> +
> + if (WpEnabled) {
> + Cr0.UintN = AsmReadCr0 ();
> + Cr0.Bits.WP = 1;
> + AsmWriteCr0 (Cr0.UintN);
> +
> + if (CetEnabled) {
> + //
> + // re-enable CET.
> + //
> + EnableCet ();
> + }
> + }
> +}
> +
> /**
> Initialize a buffer pool for page table use only.
>
> @@ -62,10 +120,9 @@ InitializePageTablePool (
> IN UINTN PoolPages
> )
> {
> - VOID *Buffer;
> - BOOLEAN CetEnabled;
> - BOOLEAN WpEnabled;
> - IA32_CR0 Cr0;
> + VOID *Buffer;
> + BOOLEAN WpEnabled;
> + BOOLEAN CetEnabled;
>
> //
> // Always reserve at least PAGE_TABLE_POOL_UNIT_PAGES, including one page
> for
> @@ -102,34 +159,9 @@ InitializePageTablePool (
> // If page table memory has been marked as RO, mark the new pool pages as
> read-only.
> //
> if (mIsReadOnlyPageTable) {
> - CetEnabled = ((AsmReadCr4 () & CR4_CET_ENABLE) != 0) ? TRUE : FALSE;
> - Cr0.UintN = AsmReadCr0 ();
> - WpEnabled = (Cr0.Bits.WP != 0) ? TRUE : FALSE;
> - if (WpEnabled) {
> - if (CetEnabled) {
> - //
> - // CET must be disabled if WP is disabled. Disable CET before clearing
> CR0.WP.
> - //
> - DisableCet ();
> - }
> -
> - Cr0.Bits.WP = 0;
> - AsmWriteCr0 (Cr0.UintN);
> - }
> -
> + DisableReadOnlyPageWriteProtect (&WpEnabled, &CetEnabled);
> SmmSetMemoryAttributes ((EFI_PHYSICAL_ADDRESS)(UINTN)Buffer,
> EFI_PAGES_TO_SIZE (PoolPages), EFI_MEMORY_RO);
> - if (WpEnabled) {
> - Cr0.UintN = AsmReadCr0 ();
> - Cr0.Bits.WP = 1;
> - AsmWriteCr0 (Cr0.UintN);
> -
> - if (CetEnabled) {
> - //
> - // re-enable CET.
> - //
> - EnableCet ();
> - }
> - }
> + EnableReadOnlyPageWriteProtect (WpEnabled, CetEnabled);
> }
>
> return TRUE;
> @@ -1782,6 +1814,7 @@ SetPageTableAttributes (
> VOID
> )
> {
> + BOOLEAN WpEnabled;
> BOOLEAN CetEnabled;
>
> if (!IfReadOnlyPageTableNeeded ()) {
> @@ -1794,15 +1827,7 @@ SetPageTableAttributes (
> // Disable write protection, because we need mark page table to be write
> protected.
> // We need *write* page table memory, to mark itself to be *read only*.
> //
> - CetEnabled = ((AsmReadCr4 () & CR4_CET_ENABLE) != 0) ? TRUE : FALSE;
> - if (CetEnabled) {
> - //
> - // CET must be disabled if WP is disabled.
> - //
> - DisableCet ();
> - }
> -
> - AsmWriteCr0 (AsmReadCr0 () & ~CR0_WP);
> + DisableReadOnlyPageWriteProtect (&WpEnabled, &CetEnabled);
>
> // Set memory used by page table as Read Only.
> DEBUG ((DEBUG_INFO, "Start...\n"));
> @@ -1811,20 +1836,12 @@ SetPageTableAttributes (
> //
> // Enable write protection, after page table attribute updated.
> //
> - AsmWriteCr0 (AsmReadCr0 () | CR0_WP);
> + EnableReadOnlyPageWriteProtect (TRUE, CetEnabled);
> mIsReadOnlyPageTable = TRUE;
>
> //
> // Flush TLB after mark all page table pool as read only.
> //
> FlushTlbForAll ();
> -
> - if (CetEnabled) {
> - //
> - // re-enable CET.
> - //
> - EnableCet ();
> - }
> -
> return;
> }
> --
> 2.31.1.windows.1
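The save/restore discipline that DisableReadOnlyPageWriteProtect()/EnableReadOnlyPageWriteProtect() implement (take CET down before clearing CR0.WP, and bring CET back only after WP is restored) can be sketched against mocked register state; the real code uses AsmReadCr0/AsmWriteCr0 and the DisableCet/EnableCet intrinsics instead of these globals:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define CR0_WP  (1u << 16)   /* CR0.WP is bit 16 */
#define CR4_CET (1u << 23)   /* CR4.CET is bit 23 */

/* Mocked control-register state, standing in for the real CPU. */
static uint32_t mCr0       = CR0_WP;
static uint32_t mCr4       = CR4_CET;
static bool     mCetActive = true;

/* Record whether WP/CET were on, then clear WP (CET first). */
static void DisableWp (bool *WpEnabled, bool *CetEnabled)
{
  *CetEnabled = (mCr4 & CR4_CET) != 0;
  *WpEnabled  = (mCr0 & CR0_WP) != 0;
  if (*WpEnabled) {
    if (*CetEnabled) {
      mCetActive = false;   /* CET must be disabled before clearing CR0.WP */
    }
    mCr0 &= ~CR0_WP;
  }
}

/* Restore WP, then re-enable CET, only if they were on before. */
static void EnableWp (bool WpEnabled, bool CetEnabled)
{
  if (WpEnabled) {
    mCr0 |= CR0_WP;
    if (CetEnabled) {
      mCetActive = true;    /* re-enable CET only after WP is back */
    }
  }
}
```

The saved booleans make the pair safe to nest around any page-table update, which is why the callers above thread WpEnabled/CetEnabled through unchanged.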
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [edk2-devel] [Patch V4 08/15] UefiCpuPkg/PiSmmCpuDxeSmm: Clear CR0.WP before modify page table
2023-05-16 9:59 ` [Patch V4 08/15] UefiCpuPkg/PiSmmCpuDxeSmm: Clear CR0.WP before modify page table duntan
@ 2023-06-02 3:12 ` Ni, Ray
0 siblings, 0 replies; 44+ messages in thread
From: Ni, Ray @ 2023-06-02 3:12 UTC (permalink / raw)
To: devel@edk2.groups.io, Tan, Dun; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
Reviewed-by: Ray Ni <ray.ni@intel.com>
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of duntan
> Sent: Tuesday, May 16, 2023 5:59 PM
> To: devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Kumar, Rahul
> R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: [edk2-devel] [Patch V4 08/15] UefiCpuPkg/PiSmmCpuDxeSmm: Clear
> CR0.WP before modify page table
>
> Clear CR0.WP before modifying the smm page table. Currently, there
> is an assumption that the smm page table is always RW before
> ReadyToLock. However, when AMD SEV is enabled, the FvbServicesSmm
> driver calls MemEncryptSevClearMmioPageEncMask to clear the
> AddressEncMask bit in the smm page table for this range:
> [PcdOvmfFdBaseAddress,PcdOvmfFdBaseAddress+PcdOvmfFirmwareFdSize]
> If a page split happens in this process, new memory for the smm
> page table is allocated. The newly allocated page table memory is
> then marked as RO in the smm page table by this FvbServicesSmm
> driver, which may lead to a #PF if smm code doesn't clear CR0.WP
> before modifying the smm page table at ReadyToLock.
>
> Signed-off-by: Dun Tan <dun.tan@intel.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 11
> +++++++++++
> UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c | 5 +++++
> 2 files changed, 16 insertions(+)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> index 4b512edf68..ef0ba9a355 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> @@ -1036,6 +1036,8 @@ SetMemMapAttributes (
> IA32_MAP_ENTRY *Map;
> UINTN Count;
> UINT64 MemoryAttribute;
> + BOOLEAN WpEnabled;
> + BOOLEAN CetEnabled;
>
> SmmGetSystemConfigurationTable (&gEdkiiPiSmmMemoryAttributesTableGuid,
> (VOID **)&MemoryAttributesTable);
> if (MemoryAttributesTable == NULL) {
> @@ -1078,6 +1080,8 @@ SetMemMapAttributes (
>
> ASSERT_RETURN_ERROR (Status);
>
> + DisableReadOnlyPageWriteProtect (&WpEnabled, &CetEnabled);
> +
> MemoryMap = MemoryMapStart;
> for (Index = 0; Index < MemoryMapEntryCount; Index++) {
> DEBUG ((DEBUG_VERBOSE, "SetAttribute: Memory Entry - 0x%lx, 0x%x\n",
> MemoryMap->PhysicalStart, MemoryMap->NumberOfPages));
> @@ -1105,6 +1109,7 @@ SetMemMapAttributes (
> MemoryMap = NEXT_MEMORY_DESCRIPTOR (MemoryMap, DescriptorSize);
> }
>
> + EnableReadOnlyPageWriteProtect (WpEnabled, CetEnabled);
> FreePool (Map);
>
> PatchSmmSaveStateMap ();
> @@ -1411,9 +1416,13 @@ SetUefiMemMapAttributes (
> UINTN MemoryMapEntryCount;
> UINTN Index;
> EFI_MEMORY_DESCRIPTOR *Entry;
> + BOOLEAN WpEnabled;
> + BOOLEAN CetEnabled;
>
> DEBUG ((DEBUG_INFO, "SetUefiMemMapAttributes\n"));
>
> + DisableReadOnlyPageWriteProtect (&WpEnabled, &CetEnabled);
> +
> if (mUefiMemoryMap != NULL) {
> MemoryMapEntryCount = mUefiMemoryMapSize/mUefiDescriptorSize;
> MemoryMap = mUefiMemoryMap;
> @@ -1492,6 +1501,8 @@ SetUefiMemMapAttributes (
> }
> }
>
> + EnableReadOnlyPageWriteProtect (WpEnabled, CetEnabled);
> +
> //
> // Do not free mUefiMemoryAttributesTable, it will be checked in
> IsSmmCommBufferForbiddenAddress().
> //
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> index 1b0b6673e1..5625ba0cac 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> @@ -574,6 +574,8 @@ InitPaging (
> BOOLEAN Nx;
> IA32_CR4 Cr4;
> BOOLEAN Enable5LevelPaging;
> + BOOLEAN WpEnabled;
> + BOOLEAN CetEnabled;
>
> Cr4.UintN = AsmReadCr4 ();
> Enable5LevelPaging = (BOOLEAN)(Cr4.Bits.LA57 == 1);
> @@ -620,6 +622,7 @@ InitPaging (
> NumberOfPdptEntries = 4;
> }
>
> + DisableReadOnlyPageWriteProtect (&WpEnabled, &CetEnabled);
> //
> // Go through page table and change 2MB-page into 4KB-page.
> //
> @@ -800,6 +803,8 @@ InitPaging (
> } // end for PML4
> } // end for PML5
>
> + EnableReadOnlyPageWriteProtect (WpEnabled, CetEnabled);
> +
> //
> // Flush TLB
> //
> --
> 2.31.1.windows.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [edk2-devel] [Patch V4 09/15] UefiCpuPkg: Extern mSmmShadowStackSize in PiSmmCpuDxeSmm.h
2023-05-16 9:59 ` [Patch V4 09/15] UefiCpuPkg: Extern mSmmShadowStackSize in PiSmmCpuDxeSmm.h duntan
@ 2023-06-02 3:16 ` Ni, Ray
2023-06-02 3:36 ` duntan
0 siblings, 1 reply; 44+ messages in thread
From: Ni, Ray @ 2023-06-02 3:16 UTC (permalink / raw)
To: devel@edk2.groups.io, Tan, Dun; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
You removed all the "extern" declarations and added one "extern" in PiSmmCpuDxeSmm.h.
But where is mSmmShadowStackSize actually defined?
Is there no link error?
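To state the link question in general terms: an "extern" line in a header is only a declaration, and linking succeeds as long as exactly one translation unit in the module still provides the definition. A minimal single-file sketch of the pattern (the value and the helper are illustrative, not taken from the driver):

```c
#include <assert.h>
#include <stdint.h>

/* PiSmmCpuDxeSmm.h (sketch): the one shared declaration every
   consumer sees via the header. */
extern uintptr_t mSmmShadowStackSize;

/* Exactly one .c file in the module must keep the definition;
   the initializer here is illustrative only. */
uintptr_t mSmmShadowStackSize = 0x2000;

/* Any other file can now use the variable through the header,
   without a local "extern" line of its own. */
static uintptr_t ReadShadowStackSize (void)
{
  return mSmmShadowStackSize;
}
```

So the commit links cleanly provided the defining .c file was untouched; only the redundant per-file declarations were removed.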
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of duntan
> Sent: Tuesday, May 16, 2023 5:59 PM
> To: devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Kumar, Rahul
> R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: [edk2-devel] [Patch V4 09/15] UefiCpuPkg: Extern
> mSmmShadowStackSize in PiSmmCpuDxeSmm.h
>
> Declare mSmmShadowStackSize as extern in PiSmmCpuDxeSmm.h and
> remove the extern declarations of mSmmShadowStackSize from the
> .c files to simplify the code.
>
> Signed-off-by: Dun Tan <dun.tan@intel.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c | 3 +--
> UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 2 --
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 1 +
> UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 2 --
> UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c | 3 +--
> 5 files changed, 3 insertions(+), 8 deletions(-)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c
> index 6c48a53f67..636dc8d92f 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c
> @@ -1,7 +1,7 @@
> /** @file
> SMM CPU misc functions for Ia32 arch specific.
>
> -Copyright (c) 2015 - 2019, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2015 - 2023, Intel Corporation. All rights reserved.<BR>
> SPDX-License-Identifier: BSD-2-Clause-Patent
>
> **/
> @@ -14,7 +14,6 @@ EFI_PHYSICAL_ADDRESS mGdtBuffer;
> UINTN mGdtBufferSize;
>
> extern BOOLEAN mCetSupported;
> -extern UINTN mSmmShadowStackSize;
>
> X86_ASSEMBLY_PATCH_LABEL mPatchCetPl0Ssp;
> X86_ASSEMBLY_PATCH_LABEL mPatchCetInterruptSsp;
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> index baf827cf9d..1878252eac 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> @@ -29,8 +29,6 @@ MM_COMPLETION mSmmStartupThisApToken;
> //
> UINT32 *mPackageFirstThreadIndex = NULL;
>
> -extern UINTN mSmmShadowStackSize;
> -
> /**
> Performs an atomic compare exchange operation to get semaphore.
> The compare exchange operation must be performed using
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> index e0c4ca76dc..a7da9673a5 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> @@ -262,6 +262,7 @@ extern EFI_SMM_CPU_PROTOCOL mSmmCpu;
> extern EFI_MM_MP_PROTOCOL mSmmMp;
> extern BOOLEAN m5LevelPagingNeeded;
> extern PAGING_MODE mPagingMode;
> +extern UINTN mSmmShadowStackSize;
>
> ///
> /// The mode of the CPU at the time an SMI occurs
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> index a25a96f68c..25ced50955 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> @@ -13,8 +13,6 @@ SPDX-License-Identifier: BSD-2-Clause-Patent
> #define PAGE_TABLE_PAGES 8
> #define ACC_MAX_BIT BIT3
>
> -extern UINTN mSmmShadowStackSize;
> -
> LIST_ENTRY mPagePool = INITIALIZE_LIST_HEAD_VARIABLE
> (mPagePool);
> BOOLEAN m1GPageTableSupport = FALSE;
> BOOLEAN mCpuSmmRestrictedMemoryAccess;
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c
> index 00a284c369..c4f21e2155 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c
> @@ -1,7 +1,7 @@
> /** @file
> SMM CPU misc functions for x64 arch specific.
>
> -Copyright (c) 2015 - 2019, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2015 - 2023, Intel Corporation. All rights reserved.<BR>
> SPDX-License-Identifier: BSD-2-Clause-Patent
>
> **/
> @@ -12,7 +12,6 @@ EFI_PHYSICAL_ADDRESS mGdtBuffer;
> UINTN mGdtBufferSize;
>
> extern BOOLEAN mCetSupported;
> -extern UINTN mSmmShadowStackSize;
>
> X86_ASSEMBLY_PATCH_LABEL mPatchCetPl0Ssp;
> X86_ASSEMBLY_PATCH_LABEL mPatchCetInterruptSsp;
> --
> 2.31.1.windows.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable() to create smm page table
2023-05-16 9:59 ` [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable() to create smm page table duntan
@ 2023-06-02 3:23 ` Ni, Ray
2023-06-02 3:36 ` duntan
0 siblings, 1 reply; 44+ messages in thread
From: Ni, Ray @ 2023-06-02 3:23 UTC (permalink / raw)
To: devel@edk2.groups.io, Tan, Dun; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
//
// SMM Stack Guard Enabled
// Append Shadow Stack after normal stack
// 2 more pages is allocated for each processor, one is guard page and the other is known good shadow stack.
//
// |= Stacks
// +--------------------------------------------------+---------------------------------------------------------------+
// | Known Good Stack | Guard Page | SMM Stack | Known Good Shadow Stack | Guard Page | SMM Shadow Stack |
// +--------------------------------------------------+---------------------------------------------------------------+
// | 4K | 4K |PcdCpuSmmStackSize| 4K | 4K |PcdCpuSmmShadowStackSize|
// |<---------------- mSmmStackSize ----------------->|<--------------------- mSmmShadowStackSize ------------------->|
// | |
// |<-------------------------------------------- Processor N ------------------------------------------------------->|
//
GenSmmPageTable() only sets the "Guard page" in "mSmmStackSize range" as not-present.
But the "Guard page" in "mSmmShadowStackSize range" is not marked as not-present.
Why?
Thanks,
Ray
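The guard-page placement in question follows directly from the layout diagram: each processor's block spans mSmmStackSize + mSmmShadowStackSize bytes, and the stack-side guard page sits one 4 KB page above the block base. A small sketch of that address arithmetic with illustrative sizes (not real PCD values):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 0x1000u

/* Per-CPU guard page address as computed in the quoted GenSmmPageTable():
   one page above the known-good stack, with a per-processor stride of
   (StackSize + ShadowStackSize). Inputs here are illustrative. */
static uint64_t StackGuardPage (uint64_t StackArrayBase,
                                uint64_t StackSize,
                                uint64_t ShadowStackSize,
                                unsigned CpuIndex)
{
  return StackArrayBase + PAGE_SIZE +
         (uint64_t)CpuIndex * (StackSize + ShadowStackSize);
}
```

Only this one page per CPU is flipped to not-present; the diagram's second guard page, inside the shadow-stack half of the block, is not covered by the loop, which is exactly the gap the question points at.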
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of duntan
> Sent: Tuesday, May 16, 2023 5:59 PM
> To: devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Kumar, Rahul
> R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable() to
> create smm page table
>
> This commit is a code refinement to the current smm page table
> generation code. Add a new GenSmmPageTable() API to create the smm
> page table based on the PageTableMap() API in CpuPageTableLib.
> Callers only need to specify the paging mode and the
> PhysicalAddressBits to map. This function can be used to create
> IA32 PAE paging as well as X64 5-level and 4-level paging.
>
> Signed-off-by: Dun Tan <dun.tan@intel.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c | 2 +-
>  UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h         |  15 +++++++++++++++
>  UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c |  65 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c            | 220 ++++++++++++------------------------------------------------------------
>  4 files changed, 107 insertions(+), 195 deletions(-)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> index 9c8107080a..b11264ce4a 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> @@ -63,7 +63,7 @@ SmmInitPageTable (
> InitializeIDTSmmStackGuard ();
> }
>
> - return Gen4GPageTable (TRUE);
> + return GenSmmPageTable (PagingPae, mPhysicalAddressBits);
> }
>
> /**
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> index a7da9673a5..5399659bc0 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> @@ -553,6 +553,21 @@ Gen4GPageTable (
> IN BOOLEAN Is32BitPageTable
> );
>
> +/**
> + Create page table based on input PagingMode and PhysicalAddressBits in smm.
> +
> + @param[in] PagingMode The paging mode.
> + @param[in] PhysicalAddressBits The bits of physical address to map.
> +
> + @retval PageTable Address
> +
> +**/
> +UINTN
> +GenSmmPageTable (
> + IN PAGING_MODE PagingMode,
> + IN UINT8 PhysicalAddressBits
> + );
> +
> /**
> Initialize global data for MP synchronization.
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> index ef0ba9a355..138ff43c9d 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> @@ -1642,6 +1642,71 @@ EdkiiSmmClearMemoryAttributes (
> return SmmClearMemoryAttributes (BaseAddress, Length, Attributes);
> }
>
> +/**
> + Create page table based on input PagingMode and PhysicalAddressBits in smm.
> +
> + @param[in] PagingMode The paging mode.
> + @param[in] PhysicalAddressBits The bits of physical address to map.
> +
> + @retval PageTable Address
> +
> +**/
> +UINTN
> +GenSmmPageTable (
> + IN PAGING_MODE PagingMode,
> + IN UINT8 PhysicalAddressBits
> + )
> +{
> + UINTN PageTableBufferSize;
> + UINTN PageTable;
> + VOID *PageTableBuffer;
> + IA32_MAP_ATTRIBUTE MapAttribute;
> + IA32_MAP_ATTRIBUTE MapMask;
> + RETURN_STATUS Status;
> + UINTN GuardPage;
> + UINTN Index;
> + UINT64 Length;
> +
> + Length = LShiftU64 (1, PhysicalAddressBits);
> + PageTable = 0;
> + PageTableBufferSize = 0;
> + MapMask.Uint64 = MAX_UINT64;
> + MapAttribute.Uint64 = mAddressEncMask;
> + MapAttribute.Bits.Present = 1;
> + MapAttribute.Bits.ReadWrite = 1;
> + MapAttribute.Bits.UserSupervisor = 1;
> + MapAttribute.Bits.Accessed = 1;
> + MapAttribute.Bits.Dirty = 1;
> +
> + Status = PageTableMap (&PageTable, PagingMode, NULL,
> &PageTableBufferSize, 0, Length, &MapAttribute, &MapMask, NULL);
> + ASSERT (Status == RETURN_BUFFER_TOO_SMALL);
> + DEBUG ((DEBUG_INFO, "GenSMMPageTable: 0x%x bytes needed for initial
> SMM page table\n", PageTableBufferSize));
> + PageTableBuffer = AllocatePageTableMemory (EFI_SIZE_TO_PAGES
> (PageTableBufferSize));
> + ASSERT (PageTableBuffer != NULL);
> + Status = PageTableMap (&PageTable, PagingMode, PageTableBuffer,
> &PageTableBufferSize, 0, Length, &MapAttribute, &MapMask, NULL);
> + ASSERT (Status == RETURN_SUCCESS);
> + ASSERT (PageTableBufferSize == 0);
> +
> + if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
> + //
> + // Mark the 4KB guard page between known good stack and smm stack as
> non-present
> + //
> + for (Index = 0; Index < gSmmCpuPrivate-
> >SmmCoreEntryContext.NumberOfCpus; Index++) {
> + GuardPage = mSmmStackArrayBase + EFI_PAGE_SIZE + Index *
> (mSmmStackSize + mSmmShadowStackSize);
> + Status = ConvertMemoryPageAttributes (PageTable, PagingMode,
> GuardPage, SIZE_4KB, EFI_MEMORY_RP, TRUE, NULL);
> + }
> + }
> +
> + if ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & BIT1) != 0) {
> + //
> + // Mark [0, 4k] as non-present
> + //
> + Status = ConvertMemoryPageAttributes (PageTable, PagingMode, 0, SIZE_4KB,
> EFI_MEMORY_RP, TRUE, NULL);
> + }
> +
> + return (UINTN)PageTable;
> +}
> +
> /**
> This function retrieves the attributes of the memory region specified by
> BaseAddress and Length. If different attributes are got from different part
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> index 25ced50955..060e6dc147 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> @@ -167,160 +167,6 @@ CalculateMaximumSupportAddress (
> return PhysicalAddressBits;
> }
>
> -/**
> - Set static page table.
> -
> - @param[in] PageTable Address of page table.
> - @param[in] PhysicalAddressBits The maximum physical address bits
> supported.
> -**/
> -VOID
> -SetStaticPageTable (
> - IN UINTN PageTable,
> - IN UINT8 PhysicalAddressBits
> - )
> -{
> - UINT64 PageAddress;
> - UINTN NumberOfPml5EntriesNeeded;
> - UINTN NumberOfPml4EntriesNeeded;
> - UINTN NumberOfPdpEntriesNeeded;
> - UINTN IndexOfPml5Entries;
> - UINTN IndexOfPml4Entries;
> - UINTN IndexOfPdpEntries;
> - UINTN IndexOfPageDirectoryEntries;
> - UINT64 *PageMapLevel5Entry;
> - UINT64 *PageMapLevel4Entry;
> - UINT64 *PageMap;
> - UINT64 *PageDirectoryPointerEntry;
> - UINT64 *PageDirectory1GEntry;
> - UINT64 *PageDirectoryEntry;
> -
> - //
> - // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses
> - // when 5-Level Paging is disabled.
> - //
> - ASSERT (PhysicalAddressBits <= 52);
> - if (!m5LevelPagingNeeded && (PhysicalAddressBits > 48)) {
> - PhysicalAddressBits = 48;
> - }
> -
> - NumberOfPml5EntriesNeeded = 1;
> - if (PhysicalAddressBits > 48) {
> - NumberOfPml5EntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits -
> 48);
> - PhysicalAddressBits = 48;
> - }
> -
> - NumberOfPml4EntriesNeeded = 1;
> - if (PhysicalAddressBits > 39) {
> - NumberOfPml4EntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits -
> 39);
> - PhysicalAddressBits = 39;
> - }
> -
> - NumberOfPdpEntriesNeeded = 1;
> - ASSERT (PhysicalAddressBits > 30);
> - NumberOfPdpEntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits - 30);
> -
> - //
> - // By architecture only one PageMapLevel4 exists - so lets allocate storage for
> it.
> - //
> - PageMap = (VOID *)PageTable;
> -
> - PageMapLevel4Entry = PageMap;
> - PageMapLevel5Entry = NULL;
> - if (m5LevelPagingNeeded) {
> - //
> - // By architecture only one PageMapLevel5 exists - so lets allocate storage for
> it.
> - //
> - PageMapLevel5Entry = PageMap;
> - }
> -
> - PageAddress = 0;
> -
> - for ( IndexOfPml5Entries = 0
> - ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded
> - ; IndexOfPml5Entries++, PageMapLevel5Entry++)
> - {
> - //
> - // Each PML5 entry points to a page of PML4 entires.
> - // So lets allocate space for them and fill them in in the IndexOfPml4Entries
> loop.
> - // When 5-Level Paging is disabled, below allocation happens only once.
> - //
> - if (m5LevelPagingNeeded) {
> - PageMapLevel4Entry = (UINT64 *)((*PageMapLevel5Entry) &
> ~mAddressEncMask & gPhyMask);
> - if (PageMapLevel4Entry == NULL) {
> - PageMapLevel4Entry = AllocatePageTableMemory (1);
> - ASSERT (PageMapLevel4Entry != NULL);
> - ZeroMem (PageMapLevel4Entry, EFI_PAGES_TO_SIZE (1));
> -
> - *PageMapLevel5Entry = (UINT64)(UINTN)PageMapLevel4Entry |
> mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> - }
> - }
> -
> - for (IndexOfPml4Entries = 0; IndexOfPml4Entries <
> (NumberOfPml5EntriesNeeded == 1 ? NumberOfPml4EntriesNeeded : 512);
> IndexOfPml4Entries++, PageMapLevel4Entry++) {
> - //
> - // Each PML4 entry points to a page of Page Directory Pointer entries.
> - //
> - PageDirectoryPointerEntry = (UINT64 *)((*PageMapLevel4Entry) &
> ~mAddressEncMask & gPhyMask);
> - if (PageDirectoryPointerEntry == NULL) {
> - PageDirectoryPointerEntry = AllocatePageTableMemory (1);
> - ASSERT (PageDirectoryPointerEntry != NULL);
> - ZeroMem (PageDirectoryPointerEntry, EFI_PAGES_TO_SIZE (1));
> -
> - *PageMapLevel4Entry = (UINT64)(UINTN)PageDirectoryPointerEntry |
> mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> - }
> -
> - if (m1GPageTableSupport) {
> - PageDirectory1GEntry = PageDirectoryPointerEntry;
> - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512;
> IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress +=
> SIZE_1GB) {
> - if ((IndexOfPml4Entries == 0) && (IndexOfPageDirectoryEntries < 4)) {
> - //
> - // Skip the < 4G entries
> - //
> - continue;
> - }
> -
> - //
> - // Fill in the Page Directory entries
> - //
> - *PageDirectory1GEntry = PageAddress | mAddressEncMask | IA32_PG_PS
> | PAGE_ATTRIBUTE_BITS;
> - }
> - } else {
> - PageAddress = BASE_4GB;
> - for (IndexOfPdpEntries = 0; IndexOfPdpEntries <
> (NumberOfPml4EntriesNeeded == 1 ? NumberOfPdpEntriesNeeded : 512);
> IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
> - if ((IndexOfPml4Entries == 0) && (IndexOfPdpEntries < 4)) {
> - //
> - // Skip the < 4G entries
> - //
> - continue;
> - }
> -
> - //
> - // Each Directory Pointer entries points to a page of Page Directory entires.
> - // So allocate space for them and fill them in in the
> IndexOfPageDirectoryEntries loop.
> - //
> - PageDirectoryEntry = (UINT64 *)((*PageDirectoryPointerEntry) &
> ~mAddressEncMask & gPhyMask);
> - if (PageDirectoryEntry == NULL) {
> - PageDirectoryEntry = AllocatePageTableMemory (1);
> - ASSERT (PageDirectoryEntry != NULL);
> - ZeroMem (PageDirectoryEntry, EFI_PAGES_TO_SIZE (1));
> -
> - //
> - // Fill in a Page Directory Pointer Entries
> - //
> - *PageDirectoryPointerEntry = (UINT64)(UINTN)PageDirectoryEntry |
> mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> - }
> -
> - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512;
> IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress +=
> SIZE_2MB) {
> - //
> - // Fill in the Page Directory entries
> - //
> - *PageDirectoryEntry = PageAddress | mAddressEncMask | IA32_PG_PS |
> PAGE_ATTRIBUTE_BITS;
> - }
> - }
> - }
> - }
> - }
> -}
> -
> /**
> Create PageTable for SMM use.
>
> @@ -332,15 +178,16 @@ SmmInitPageTable (
> VOID
> )
> {
> - EFI_PHYSICAL_ADDRESS Pages;
> - UINT64 *PTEntry;
> + UINTN PageTable;
> LIST_ENTRY *FreePage;
> UINTN Index;
> UINTN PageFaultHandlerHookAddress;
> IA32_IDT_GATE_DESCRIPTOR *IdtEntry;
> EFI_STATUS Status;
> + UINT64 *PdptEntry;
> UINT64 *Pml4Entry;
> UINT64 *Pml5Entry;
> + UINT8 PhysicalAddressBits;
>
> //
> // Initialize spin lock
> @@ -357,59 +204,44 @@ SmmInitPageTable (
> } else {
> mPagingMode = m1GPageTableSupport ? Paging4Level1GB : Paging4Level;
> }
> +
> DEBUG ((DEBUG_INFO, "5LevelPaging Needed - %d\n",
> m5LevelPagingNeeded));
> DEBUG ((DEBUG_INFO, "1GPageTable Support - %d\n",
> m1GPageTableSupport));
> DEBUG ((DEBUG_INFO, "PcdCpuSmmRestrictedMemoryAccess - %d\n",
> mCpuSmmRestrictedMemoryAccess));
> DEBUG ((DEBUG_INFO, "PhysicalAddressBits - %d\n",
> mPhysicalAddressBits));
> - //
> - // Generate PAE page table for the first 4GB memory space
> - //
> - Pages = Gen4GPageTable (FALSE);
>
> //
> - // Set IA32_PG_PMNT bit to mask this entry
> + // Generate initial SMM page table.
> + // Only map [0, 4G] when PcdCpuSmmRestrictedMemoryAccess is FALSE.
> //
> - PTEntry = (UINT64 *)(UINTN)Pages;
> - for (Index = 0; Index < 4; Index++) {
> - PTEntry[Index] |= IA32_PG_PMNT;
> - }
> -
> - //
> - // Fill Page-Table-Level4 (PML4) entry
> - //
> - Pml4Entry = (UINT64 *)AllocatePageTableMemory (1);
> - ASSERT (Pml4Entry != NULL);
> - *Pml4Entry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> - ZeroMem (Pml4Entry + 1, EFI_PAGE_SIZE - sizeof (*Pml4Entry));
> -
> - //
> - // Set sub-entries number
> - //
> - SetSubEntriesNum (Pml4Entry, 3);
> - PTEntry = Pml4Entry;
> + PhysicalAddressBits = mCpuSmmRestrictedMemoryAccess ?
> mPhysicalAddressBits : 32;
> + PageTable = GenSmmPageTable (mPagingMode, PhysicalAddressBits);
>
> if (m5LevelPagingNeeded) {
> + Pml5Entry = (UINT64 *)PageTable;
> //
> - // Fill PML5 entry
> - //
> - Pml5Entry = (UINT64 *)AllocatePageTableMemory (1);
> - ASSERT (Pml5Entry != NULL);
> - *Pml5Entry = (UINTN)Pml4Entry | mAddressEncMask |
> PAGE_ATTRIBUTE_BITS;
> - ZeroMem (Pml5Entry + 1, EFI_PAGE_SIZE - sizeof (*Pml5Entry));
> - //
> - // Set sub-entries number
> + // Set Pml5Entry sub-entries number for smm PF handler usage.
> //
> SetSubEntriesNum (Pml5Entry, 1);
> - PTEntry = Pml5Entry;
> + Pml4Entry = (UINT64 *)((*Pml5Entry) & ~mAddressEncMask & gPhyMask);
> + } else {
> + Pml4Entry = (UINT64 *)PageTable;
> + }
> +
> + //
> + // Set IA32_PG_PMNT bit to mask first 4 PdptEntry.
> + //
> + PdptEntry = (UINT64 *)((*Pml4Entry) & ~mAddressEncMask & gPhyMask);
> + for (Index = 0; Index < 4; Index++) {
> + PdptEntry[Index] |= IA32_PG_PMNT;
> }
>
> - if (mCpuSmmRestrictedMemoryAccess) {
> + if (!mCpuSmmRestrictedMemoryAccess) {
> //
> - // When access to non-SMRAM memory is restricted, create page table
> - // that covers all memory space.
> + // Set Pml4Entry sub-entries number for smm PF handler usage.
> //
> - SetStaticPageTable ((UINTN)PTEntry, mPhysicalAddressBits);
> - } else {
> + SetSubEntriesNum (Pml4Entry, 3);
> +
> //
> // Add pages to page pool
> //
> @@ -466,7 +298,7 @@ SmmInitPageTable (
> //
> // Return the address of PML4/PML5 (to set CR3)
> //
> - return (UINT32)(UINTN)PTEntry;
> + return (UINT32)PageTable;
> }
>
> /**
> --
> 2.31.1.windows.1
>
>
>
>
>
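The refactored SmmInitPageTable() above chooses how much address space the initial SMM page table maps: the full physical address space when PcdCpuSmmRestrictedMemoryAccess is TRUE, otherwise only [0, 4G]. A minimal standalone sketch of that arithmetic follows; `GetSmmMapLength` is a hypothetical helper invented for illustration (the patch itself inlines this as `PhysicalAddressBits = mCpuSmmRestrictedMemoryAccess ? mPhysicalAddressBits : 32` followed by `Length = LShiftU64 (1, PhysicalAddressBits)`):

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical helper mirroring the patch's logic: when restricted
   memory access is disabled, only [0, 4G] is mapped (32 address bits);
   otherwise the full physical address space is mapped. */
static uint64_t
GetSmmMapLength (bool RestrictedMemoryAccess, uint8_t PhysicalAddressBits)
{
  uint8_t Bits = RestrictedMemoryAccess ? PhysicalAddressBits : 32;
  return (uint64_t)1 << Bits;   /* Length = LShiftU64 (1, Bits) */
}
```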
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [edk2-devel] [Patch V4 11/15] UefiCpuPkg: Use GenSmmPageTable() to create Smm S3 page table
2023-05-16 9:59 ` [Patch V4 11/15] UefiCpuPkg: Use GenSmmPageTable() to create Smm S3 " duntan
@ 2023-06-02 3:31 ` Ni, Ray
2023-06-02 3:37 ` duntan
0 siblings, 1 reply; 44+ messages in thread
From: Ni, Ray @ 2023-06-02 3:31 UTC (permalink / raw)
To: devel@edk2.groups.io, Tan, Dun; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
> - mSmmS3ResumeState->SmmS3Cr3 = (UINT32)(UINTN)PTEntry;
> + mSmmS3ResumeState->SmmS3Cr3 = (UINT32)GenSmmPageTable
> (Paging4Level, 32);
Why is "Paging4Level" used for S3 page table?
The S3 page table is used by S3Resume module:
if (SmmS3ResumeState->Signature == SMM_S3_RESUME_SMM_64) {
//
// Switch to long mode to complete resume.
//
......
AsmWriteCr3 ((UINTN)SmmS3ResumeState->SmmS3Cr3);
......
AsmEnablePaging64 (
0x38,
SmmS3ResumeState->SmmS3ResumeEntryPoint,
(UINT64)(UINTN)AcpiS3Context,
0,
SmmS3ResumeState->SmmS3StackBase + SmmS3ResumeState->SmmS3StackSize
);
The S3 page table is only used when PEI runs in 32-bit mode, which resolves my concern
that a CPU in 64-bit mode cannot switch from 5-level paging to 4-level paging.
And I guess your code just aligns with the old behavior.
Can you add comments above to explain that SmmS3Cr3 is only used by the S3Resume PEIM
to switch the CPU from 32-bit to 64-bit mode?
With that, Reviewed-by: Ray Ni <ray.ni@intel.com>
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [edk2-devel] [Patch V4 12/15] UefiCpuPkg: Sort mSmmCpuSmramRanges in FindSmramInfo
2023-05-16 9:59 ` [Patch V4 12/15] UefiCpuPkg: Sort mSmmCpuSmramRanges in FindSmramInfo duntan
@ 2023-06-02 3:33 ` Ni, Ray
2023-06-02 3:43 ` duntan
0 siblings, 1 reply; 44+ messages in thread
From: Ni, Ray @ 2023-06-02 3:33 UTC (permalink / raw)
To: devel@edk2.groups.io, Tan, Dun; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
> + Buffer = AllocateZeroPool (sizeof (EFI_SMRAM_DESCRIPTOR));
> + ASSERT (Buffer != NULL);
You can define a local variable "EFI_SMRAM_DESCRIPTOR OneSmramDescriptor" to avoid pool allocation.
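A minimal sketch of the suggested change: pass a stack-local descriptor as the sort's one-element swap buffer instead of calling AllocateZeroPool()/FreePool() around QuickSort(). Note the types and the sort routine below are simplified standalone stand-ins for edk2's EFI_SMRAM_DESCRIPTOR and BaseLib QuickSort(), written so the sketch compiles on its own; only the scratch-buffer calling convention is the point:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

typedef struct {
  uint64_t PhysicalStart;
  uint64_t PhysicalSize;
} SMRAM_DESCRIPTOR;   /* simplified stand-in for EFI_SMRAM_DESCRIPTOR */

/* Simplified stand-in for edk2's QuickSort(): the caller supplies a
   one-element scratch buffer used when moving elements, so the sort
   itself never allocates. (Insertion sort keeps the sketch short; the
   scratch-buffer contract is what matters.) */
static void
SortWithScratch (void *Base, size_t Count, size_t ElemSize,
                 int (*Compare)(const void *, const void *), void *Scratch)
{
  uint8_t  *Arr = (uint8_t *)Base;
  for (size_t i = 1; i < Count; i++) {
    memcpy (Scratch, Arr + i * ElemSize, ElemSize);
    size_t j = i;
    while (j > 0 && Compare (Arr + (j - 1) * ElemSize, Scratch) > 0) {
      memcpy (Arr + j * ElemSize, Arr + (j - 1) * ElemSize, ElemSize);
      j--;
    }
    memcpy (Arr + j * ElemSize, Scratch, ElemSize);
  }
}

static int
SmramDescCompare (const void *A, const void *B)
{
  const SMRAM_DESCRIPTOR *Da = A;
  const SMRAM_DESCRIPTOR *Db = B;
  if (Da->PhysicalStart > Db->PhysicalStart) { return 1; }
  if (Da->PhysicalStart < Db->PhysicalStart) { return -1; }
  return 0;
}

/* Caller side: a stack-local descriptor serves as the scratch buffer,
   replacing the AllocateZeroPool()/FreePool() pair in the patch. */
static void
SortSmramRanges (SMRAM_DESCRIPTOR *Ranges, size_t Count)
{
  SMRAM_DESCRIPTOR OneSmramDescriptor;   /* local, no pool allocation */
  SortWithScratch (Ranges, Count, sizeof (SMRAM_DESCRIPTOR),
                   SmramDescCompare, &OneSmramDescriptor);
}
```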
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [Patch V4 13/15] UefiCpuPkg: Sort mProtectionMemRange when ReadyToLock
2023-05-16 9:59 ` [Patch V4 13/15] UefiCpuPkg: Sort mProtectionMemRange when ReadyToLock duntan
@ 2023-06-02 3:34 ` Ni, Ray
2023-06-02 3:35 ` Ni, Ray
0 siblings, 1 reply; 44+ messages in thread
From: Ni, Ray @ 2023-06-02 3:34 UTC (permalink / raw)
To: Tan, Dun, devel@edk2.groups.io; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
Similar comments as patch #12.
You could avoid pool allocation.
> -----Original Message-----
> From: Tan, Dun <dun.tan@intel.com>
> Sent: Tuesday, May 16, 2023 6:00 PM
> To: devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Kumar, Rahul
> R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: [Patch V4 13/15] UefiCpuPkg: Sort mProtectionMemRange when
> ReadyToLock
>
> Sort mProtectionMemRange in InitProtectedMemRange() when
> ReadyToLock.
>
> Signed-off-by: Dun Tan <dun.tan@intel.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c | 35
> +++++++++++++++++++++++++++++++++++
> 1 file changed, 35 insertions(+)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> index 5625ba0cac..b298e2fb99 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> @@ -375,6 +375,32 @@ IsAddressSplit (
> return FALSE;
> }
>
> +/**
> + Function to compare 2 MEMORY_PROTECTION_RANGE based on range base.
> +
> + @param[in] Buffer1 Pointer to the first MEMORY_PROTECTION_RANGE to compare
> + @param[in] Buffer2 Pointer to the second MEMORY_PROTECTION_RANGE to compare
> +
> + @retval 0 Buffer1 equal to Buffer2
> + @retval <0 Buffer1 is less than Buffer2
> + @retval >0 Buffer1 is greater than Buffer2
> +**/
> +INTN
> +EFIAPI
> +ProtectionRangeCompare (
> + IN CONST VOID *Buffer1,
> + IN CONST VOID *Buffer2
> + )
> +{
> + if (((MEMORY_PROTECTION_RANGE *)Buffer1)->Range.Base >
> ((MEMORY_PROTECTION_RANGE *)Buffer2)->Range.Base) {
> + return 1;
> + } else if (((MEMORY_PROTECTION_RANGE *)Buffer1)->Range.Base <
> ((MEMORY_PROTECTION_RANGE *)Buffer2)->Range.Base) {
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> /**
> Initialize the protected memory ranges and the 4KB-page mapped memory
> ranges.
>
> @@ -397,6 +423,7 @@ InitProtectedMemRange (
> EFI_PHYSICAL_ADDRESS Base2MBAlignedAddress;
> UINT64 High4KBPageSize;
> UINT64 Low4KBPageSize;
> + VOID *Buffer;
>
> NumberOfDescriptors = 0;
> NumberOfAddedDescriptors = mSmmCpuSmramRangeCount;
> @@ -533,6 +560,14 @@ InitProtectedMemRange (
>
> mSplitMemRangeCount = NumberOfSpliteRange;
>
> + //
> + // Sort the mProtectionMemRange
> + //
> + Buffer = AllocateZeroPool (sizeof (MEMORY_PROTECTION_RANGE));
> + ASSERT (Buffer != NULL);
> + QuickSort (mProtectionMemRange, mProtectionMemRangeCount, sizeof
> (MEMORY_PROTECTION_RANGE),
> (BASE_SORT_COMPARE)ProtectionRangeCompare, Buffer);
> + FreePool (Buffer);
> +
> DEBUG ((DEBUG_INFO, "SMM Profile Memory Ranges:\n"));
> for (Index = 0; Index < mProtectionMemRangeCount; Index++) {
> DEBUG ((DEBUG_INFO, "mProtectionMemRange[%d].Base = %lx\n", Index,
> mProtectionMemRange[Index].Range.Base));
> --
> 2.31.1.windows.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [Patch V4 13/15] UefiCpuPkg: Sort mProtectionMemRange when ReadyToLock
2023-06-02 3:34 ` Ni, Ray
@ 2023-06-02 3:35 ` Ni, Ray
2023-06-02 3:55 ` duntan
0 siblings, 1 reply; 44+ messages in thread
From: Ni, Ray @ 2023-06-02 3:35 UTC (permalink / raw)
To: Tan, Dun, devel@edk2.groups.io; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
Why do you add the sort logic?
I thought you might have further changes to remove some unnecessary logic that deals with the unsorted array.
> -----Original Message-----
> From: Ni, Ray
> Sent: Friday, June 2, 2023 11:34 AM
> To: Tan, Dun <dun.tan@intel.com>; devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R
> <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: RE: [Patch V4 13/15] UefiCpuPkg: Sort mProtectionMemRange when
> ReadyToLock
>
> Similar comments as patch #12.
> You could avoid pool allocation.
>
> > -----Original Message-----
> > From: Tan, Dun <dun.tan@intel.com>
> > Sent: Tuesday, May 16, 2023 6:00 PM
> > To: devel@edk2.groups.io
> > Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Kumar,
> Rahul
> > R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> > Subject: [Patch V4 13/15] UefiCpuPkg: Sort mProtectionMemRange when
> > ReadyToLock
> >
> > Sort mProtectionMemRange in InitProtectedMemRange() when
> > ReadyToLock.
> >
> > Signed-off-by: Dun Tan <dun.tan@intel.com>
> > Cc: Eric Dong <eric.dong@intel.com>
> > Cc: Ray Ni <ray.ni@intel.com>
> > Cc: Rahul Kumar <rahul1.kumar@intel.com>
> > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > ---
> > UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c | 35
> > +++++++++++++++++++++++++++++++++++
> > 1 file changed, 35 insertions(+)
> >
> > diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> > b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> > index 5625ba0cac..b298e2fb99 100644
> > --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> > +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> > @@ -375,6 +375,32 @@ IsAddressSplit (
> > return FALSE;
> > }
> >
> > +/**
> > + Function to compare 2 MEMORY_PROTECTION_RANGE based on range base.
> > +
> > + @param[in] Buffer1 pointer to Device Path poiner to compare
> > + @param[in] Buffer2 pointer to second DevicePath pointer to compare
> > +
> > + @retval 0 Buffer1 equal to Buffer2
> > + @retval <0 Buffer1 is less than Buffer2
> > + @retval >0 Buffer1 is greater than Buffer2
> > +**/
> > +INTN
> > +EFIAPI
> > +ProtectionRangeCompare (
> > + IN CONST VOID *Buffer1,
> > + IN CONST VOID *Buffer2
> > + )
> > +{
> > + if (((MEMORY_PROTECTION_RANGE *)Buffer1)->Range.Base >
> > ((MEMORY_PROTECTION_RANGE *)Buffer2)->Range.Base) {
> > + return 1;
> > + } else if (((MEMORY_PROTECTION_RANGE *)Buffer1)->Range.Base <
> > ((MEMORY_PROTECTION_RANGE *)Buffer2)->Range.Base) {
> > + return -1;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > /**
> > Initialize the protected memory ranges and the 4KB-page mapped memory
> > ranges.
> >
> > @@ -397,6 +423,7 @@ InitProtectedMemRange (
> > EFI_PHYSICAL_ADDRESS Base2MBAlignedAddress;
> > UINT64 High4KBPageSize;
> > UINT64 Low4KBPageSize;
> > + VOID *Buffer;
> >
> > NumberOfDescriptors = 0;
> > NumberOfAddedDescriptors = mSmmCpuSmramRangeCount;
> > @@ -533,6 +560,14 @@ InitProtectedMemRange (
> >
> > mSplitMemRangeCount = NumberOfSpliteRange;
> >
> > + //
> > + // Sort the mProtectionMemRange
> > + //
> > + Buffer = AllocateZeroPool (sizeof (MEMORY_PROTECTION_RANGE));
> > + ASSERT (Buffer != NULL);
> > + QuickSort (mProtectionMemRange, mProtectionMemRangeCount, sizeof
> > (MEMORY_PROTECTION_RANGE),
> > (BASE_SORT_COMPARE)ProtectionRangeCompare, Buffer);
> > + FreePool (Buffer);
> > +
> > DEBUG ((DEBUG_INFO, "SMM Profile Memory Ranges:\n"));
> > for (Index = 0; Index < mProtectionMemRangeCount; Index++) {
> > DEBUG ((DEBUG_INFO, "mProtectionMemRange[%d].Base = %lx\n", Index,
> > mProtectionMemRange[Index].Range.Base));
> > --
> > 2.31.1.windows.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable() to create smm page table
2023-06-02 3:23 ` [edk2-devel] " Ni, Ray
@ 2023-06-02 3:36 ` duntan
2023-06-02 3:46 ` duntan
0 siblings, 1 reply; 44+ messages in thread
From: duntan @ 2023-06-02 3:36 UTC (permalink / raw)
To: Ni, Ray, devel@edk2.groups.io; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
GenSmmPageTable() doesn't mark the "Guard page" in the "mSmmShadowStackSize range" in order to align with the old behavior.
GenSmmPageTable() is also used to create SmmS3Cr3, and the "Guard page" in the "mSmmShadowStackSize range" is not marked as non-present in SmmS3Cr3.
In the old code logic, the "Guard page" in the "mSmmShadowStackSize range" is marked as non-present after InitializeMpServiceData() creates the initial SMM page table.
-----Original Message-----
From: Ni, Ray <ray.ni@intel.com>
Sent: Friday, June 2, 2023 11:23 AM
To: devel@edk2.groups.io; Tan, Dun <dun.tan@intel.com>
Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
Subject: RE: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable() to create smm page table
//
// SMM Stack Guard Enabled
// Append Shadow Stack after normal stack
// 2 more pages is allocated for each processor, one is guard page and the other is known good shadow stack.
//
// |= Stacks
// +--------------------------------------------------+---------------------------------------------------------------+
// | Known Good Stack | Guard Page | SMM Stack | Known Good Shadow Stack | Guard Page | SMM Shadow Stack |
// +--------------------------------------------------+---------------------------------------------------------------+
// | 4K | 4K |PcdCpuSmmStackSize| 4K | 4K |PcdCpuSmmShadowStackSize|
// |<---------------- mSmmStackSize ----------------->|<--------------------- mSmmShadowStackSize ------------------->|
// | |
// |<-------------------------------------------- Processor N ------------------------------------------------------->|
//
GenSmmPageTable() only sets the "Guard page" in "mSmmStackSize range" as not-present.
But the "Guard page" in "mSmmShadowStackSize range" is not marked as not-present.
Why?
Thanks,
Ray
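The per-CPU guard-page addresses implied by the stack layout diagram above can be sketched as follows. `StackGuardPage` mirrors the expression the patch uses for the normal-stack guard page (`mSmmStackArrayBase + EFI_PAGE_SIZE + Index * (mSmmStackSize + mSmmShadowStackSize)`); `ShadowStackGuardPage` is a hypothetical helper for the shadow-stack guard page that the question above points out is not marked, with its offset inferred from the diagram rather than taken from the patch:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 0x1000u   /* EFI_PAGE_SIZE */

/* Address of the 4KB guard page between the known-good stack and the
   SMM stack of processor Index, per the layout diagram. */
static uintptr_t
StackGuardPage (uintptr_t StackArrayBase, uint32_t Index,
                uintptr_t SmmStackSize, uintptr_t SmmShadowStackSize)
{
  return StackArrayBase + PAGE_SIZE
         + (uintptr_t)Index * (SmmStackSize + SmmShadowStackSize);
}

/* Hypothetical: address of the 4KB guard page between the known-good
   shadow stack and the SMM shadow stack of processor Index. Assumes,
   per the diagram, that the shadow range starts with a 4KB known-good
   shadow stack followed by the guard page. */
static uintptr_t
ShadowStackGuardPage (uintptr_t StackArrayBase, uint32_t Index,
                      uintptr_t SmmStackSize, uintptr_t SmmShadowStackSize)
{
  return StackArrayBase
         + (uintptr_t)Index * (SmmStackSize + SmmShadowStackSize)
         + SmmStackSize + PAGE_SIZE;
}
```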
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of duntan
> Sent: Tuesday, May 16, 2023 5:59 PM
> To: devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>;
> Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann
> <kraxel@redhat.com>
> Subject: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add
> GenSmmPageTable() to create smm page table
>
> This commit is code refinement to current smm pagetable generation
> code. Add a new GenSmmPageTable() API to create smm page table based
> on the PageTableMap() API in CpuPageTableLib. Caller only needs to
> specify the paging mode and the PhysicalAddressBits to map.
> This function can be used to create both IA32 pae paging and X64
> 5level, 4level paging.
>
> Signed-off-by: Dun Tan <dun.tan@intel.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c | 2 +-
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 15
> +++++++++++++++
> UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 65
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 220
> ++++++++++++++++++++++++++--------------------------------------------
> ++++++++++++++++++++++++++---------------
> ----------------------------------------------------------------------
> ----------------------------
> -------------------------------------
> 4 files changed, 107 insertions(+), 195 deletions(-)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> index 9c8107080a..b11264ce4a 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> @@ -63,7 +63,7 @@ SmmInitPageTable (
> InitializeIDTSmmStackGuard ();
> }
>
> - return Gen4GPageTable (TRUE);
> + return GenSmmPageTable (PagingPae, mPhysicalAddressBits);
> }
>
> /**
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> index a7da9673a5..5399659bc0 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> @@ -553,6 +553,21 @@ Gen4GPageTable (
> IN BOOLEAN Is32BitPageTable
> );
>
> +/**
> + Create page table based on input PagingMode and PhysicalAddressBits in smm.
> +
> + @param[in] PagingMode The paging mode.
> + @param[in] PhysicalAddressBits The bits of physical address to map.
> +
> + @retval PageTable Address
> +
> +**/
> +UINTN
> +GenSmmPageTable (
> + IN PAGING_MODE PagingMode,
> + IN UINT8 PhysicalAddressBits
> + );
> +
> /**
> Initialize global data for MP synchronization.
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> index ef0ba9a355..138ff43c9d 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> @@ -1642,6 +1642,71 @@ EdkiiSmmClearMemoryAttributes (
> return SmmClearMemoryAttributes (BaseAddress, Length, Attributes);
> }
>
> +/**
> + Create page table based on input PagingMode and PhysicalAddressBits in smm.
> +
> + @param[in] PagingMode The paging mode.
> + @param[in] PhysicalAddressBits The bits of physical address to map.
> +
> + @retval PageTable Address
> +
> +**/
> +UINTN
> +GenSmmPageTable (
> + IN PAGING_MODE PagingMode,
> + IN UINT8 PhysicalAddressBits
> + )
> +{
> + UINTN PageTableBufferSize;
> + UINTN PageTable;
> + VOID *PageTableBuffer;
> + IA32_MAP_ATTRIBUTE MapAttribute;
> + IA32_MAP_ATTRIBUTE MapMask;
> + RETURN_STATUS Status;
> + UINTN GuardPage;
> + UINTN Index;
> + UINT64 Length;
> +
> + Length = LShiftU64 (1, PhysicalAddressBits);
> + PageTable = 0;
> + PageTableBufferSize = 0;
> + MapMask.Uint64 = MAX_UINT64;
> + MapAttribute.Uint64 = mAddressEncMask;
> + MapAttribute.Bits.Present = 1;
> + MapAttribute.Bits.ReadWrite = 1;
> + MapAttribute.Bits.UserSupervisor = 1;
> + MapAttribute.Bits.Accessed = 1;
> + MapAttribute.Bits.Dirty = 1;
> +
> + Status = PageTableMap (&PageTable, PagingMode, NULL,
> &PageTableBufferSize, 0, Length, &MapAttribute, &MapMask, NULL);
> + ASSERT (Status == RETURN_BUFFER_TOO_SMALL); DEBUG ((DEBUG_INFO,
> + "GenSMMPageTable: 0x%x bytes needed for initial
> SMM page table\n", PageTableBufferSize));
> + PageTableBuffer = AllocatePageTableMemory (EFI_SIZE_TO_PAGES
> (PageTableBufferSize));
> + ASSERT (PageTableBuffer != NULL);
> + Status = PageTableMap (&PageTable, PagingMode, PageTableBuffer,
> &PageTableBufferSize, 0, Length, &MapAttribute, &MapMask, NULL);
> + ASSERT (Status == RETURN_SUCCESS);
> + ASSERT (PageTableBufferSize == 0);
> +
> + if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
> + //
> + // Mark the 4KB guard page between known good stack and smm stack
> + as
> non-present
> + //
> + for (Index = 0; Index < gSmmCpuPrivate-
> >SmmCoreEntryContext.NumberOfCpus; Index++) {
> + GuardPage = mSmmStackArrayBase + EFI_PAGE_SIZE + Index *
> (mSmmStackSize + mSmmShadowStackSize);
> + Status = ConvertMemoryPageAttributes (PageTable, PagingMode,
> GuardPage, SIZE_4KB, EFI_MEMORY_RP, TRUE, NULL);
> + }
> + }
> +
> + if ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & BIT1) != 0) {
> + //
> + // Mark [0, 4k] as non-present
> + //
> + Status = ConvertMemoryPageAttributes (PageTable, PagingMode, 0,
> + SIZE_4KB,
> EFI_MEMORY_RP, TRUE, NULL);
> + }
> +
> + return (UINTN)PageTable;
> +}
> +
> /**
> This function retrieves the attributes of the memory region specified by
> BaseAddress and Length. If different attributes are got from
> different part diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> index 25ced50955..060e6dc147 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> @@ -167,160 +167,6 @@ CalculateMaximumSupportAddress (
> return PhysicalAddressBits;
> }
>
> -/**
> - Set static page table.
> -
> - @param[in] PageTable Address of page table.
> - @param[in] PhysicalAddressBits The maximum physical address bits
> supported.
> -**/
> -VOID
> -SetStaticPageTable (
> - IN UINTN PageTable,
> - IN UINT8 PhysicalAddressBits
> - )
> -{
> - UINT64 PageAddress;
> - UINTN NumberOfPml5EntriesNeeded;
> - UINTN NumberOfPml4EntriesNeeded;
> - UINTN NumberOfPdpEntriesNeeded;
> - UINTN IndexOfPml5Entries;
> - UINTN IndexOfPml4Entries;
> - UINTN IndexOfPdpEntries;
> - UINTN IndexOfPageDirectoryEntries;
> - UINT64 *PageMapLevel5Entry;
> - UINT64 *PageMapLevel4Entry;
> - UINT64 *PageMap;
> - UINT64 *PageDirectoryPointerEntry;
> - UINT64 *PageDirectory1GEntry;
> - UINT64 *PageDirectoryEntry;
> -
> - //
> - // IA-32e paging translates 48-bit linear addresses to 52-bit
> physical addresses
> - // when 5-Level Paging is disabled.
> - //
> - ASSERT (PhysicalAddressBits <= 52);
> - if (!m5LevelPagingNeeded && (PhysicalAddressBits > 48)) {
> - PhysicalAddressBits = 48;
> - }
> -
> - NumberOfPml5EntriesNeeded = 1;
> - if (PhysicalAddressBits > 48) {
> - NumberOfPml5EntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits -
> 48);
> - PhysicalAddressBits = 48;
> - }
> -
> - NumberOfPml4EntriesNeeded = 1;
> - if (PhysicalAddressBits > 39) {
> - NumberOfPml4EntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits -
> 39);
> - PhysicalAddressBits = 39;
> - }
> -
> - NumberOfPdpEntriesNeeded = 1;
> - ASSERT (PhysicalAddressBits > 30);
> - NumberOfPdpEntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits
> - 30);
> -
> - //
> [...]
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [edk2-devel] [Patch V4 09/15] UefiCpuPkg: Extern mSmmShadowStackSize in PiSmmCpuDxeSmm.h
2023-06-02 3:16 ` [edk2-devel] " Ni, Ray
@ 2023-06-02 3:36 ` duntan
0 siblings, 0 replies; 44+ messages in thread
From: duntan @ 2023-06-02 3:36 UTC (permalink / raw)
To: Ni, Ray, devel@edk2.groups.io; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
Ray,
The definition for mSmmShadowStackSize is in PiSmmCpuDxeSmm.c
I have tested the build process locally and in CI, and both work.
Thanks,
Dun
-----Original Message-----
From: Ni, Ray <ray.ni@intel.com>
Sent: Friday, June 2, 2023 11:17 AM
To: devel@edk2.groups.io; Tan, Dun <dun.tan@intel.com>
Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
Subject: RE: [edk2-devel] [Patch V4 09/15] UefiCpuPkg: Extern mSmmShadowStackSize in PiSmmCpuDxeSmm.h
You removed all "extern" and added one "extern" in PiSmmCpuDxeSmm.h.
But where is mSmmShadowStackSize defined?
No link error?
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of duntan
> Sent: Tuesday, May 16, 2023 5:59 PM
> To: devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>;
> Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann
> <kraxel@redhat.com>
> Subject: [edk2-devel] [Patch V4 09/15] UefiCpuPkg: Extern
> mSmmShadowStackSize in PiSmmCpuDxeSmm.h
>
> Extern mSmmShadowStackSize in PiSmmCpuDxeSmm.h and remove extern for
> mSmmShadowStackSize in c files to simplify code.
>
> Signed-off-by: Dun Tan <dun.tan@intel.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c | 3 +--
> UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 2 --
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 1 +
> UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 2 --
> UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c | 3 +--
> 5 files changed, 3 insertions(+), 8 deletions(-)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c
> index 6c48a53f67..636dc8d92f 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/SmmFuncsArch.c
> @@ -1,7 +1,7 @@
> /** @file
> SMM CPU misc functions for Ia32 arch specific.
>
> -Copyright (c) 2015 - 2019, Intel Corporation. All rights
> reserved.<BR>
> +Copyright (c) 2015 - 2023, Intel Corporation. All rights
> +reserved.<BR>
> SPDX-License-Identifier: BSD-2-Clause-Patent
>
> **/
> @@ -14,7 +14,6 @@ EFI_PHYSICAL_ADDRESS mGdtBuffer;
> UINTN mGdtBufferSize;
>
> extern BOOLEAN mCetSupported;
> -extern UINTN mSmmShadowStackSize;
>
> X86_ASSEMBLY_PATCH_LABEL mPatchCetPl0Ssp; X86_ASSEMBLY_PATCH_LABEL
> mPatchCetInterruptSsp; diff --git
> a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> index baf827cf9d..1878252eac 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> @@ -29,8 +29,6 @@ MM_COMPLETION mSmmStartupThisApToken;
> //
> UINT32 *mPackageFirstThreadIndex = NULL;
>
> -extern UINTN mSmmShadowStackSize;
> -
> /**
> Performs an atomic compare exchange operation to get semaphore.
> The compare exchange operation must be performed using diff --git
> a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> index e0c4ca76dc..a7da9673a5 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> @@ -262,6 +262,7 @@ extern EFI_SMM_CPU_PROTOCOL mSmmCpu;
> extern EFI_MM_MP_PROTOCOL mSmmMp;
> extern BOOLEAN m5LevelPagingNeeded;
> extern PAGING_MODE mPagingMode;
> +extern UINTN mSmmShadowStackSize;
>
> ///
> /// The mode of the CPU at the time an SMI occurs diff --git
> a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> index a25a96f68c..25ced50955 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> @@ -13,8 +13,6 @@ SPDX-License-Identifier: BSD-2-Clause-Patent
> #define PAGE_TABLE_PAGES 8
> #define ACC_MAX_BIT BIT3
>
> -extern UINTN mSmmShadowStackSize;
> -
> LIST_ENTRY mPagePool = INITIALIZE_LIST_HEAD_VARIABLE
> (mPagePool);
> BOOLEAN m1GPageTableSupport = FALSE;
> BOOLEAN mCpuSmmRestrictedMemoryAccess;
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c
> index 00a284c369..c4f21e2155 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmFuncsArch.c
> @@ -1,7 +1,7 @@
> /** @file
> SMM CPU misc functions for x64 arch specific.
>
> -Copyright (c) 2015 - 2019, Intel Corporation. All rights
> reserved.<BR>
> +Copyright (c) 2015 - 2023, Intel Corporation. All rights
> +reserved.<BR>
> SPDX-License-Identifier: BSD-2-Clause-Patent
>
> **/
> @@ -12,7 +12,6 @@ EFI_PHYSICAL_ADDRESS mGdtBuffer;
> UINTN mGdtBufferSize;
>
> extern BOOLEAN mCetSupported;
> -extern UINTN mSmmShadowStackSize;
>
> X86_ASSEMBLY_PATCH_LABEL mPatchCetPl0Ssp; X86_ASSEMBLY_PATCH_LABEL
> mPatchCetInterruptSsp;
> --
> 2.31.1.windows.1
* Re: [edk2-devel] [Patch V4 11/15] UefiCpuPkg: Use GenSmmPageTable() to create Smm S3 page table
2023-06-02 3:31 ` [edk2-devel] " Ni, Ray
@ 2023-06-02 3:37 ` duntan
0 siblings, 0 replies; 44+ messages in thread
From: duntan @ 2023-06-02 3:37 UTC (permalink / raw)
To: Ni, Ray, devel@edk2.groups.io; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
Sure, it's to align with the old behavior. Will add comments to explain it.
Thanks,
Dun
-----Original Message-----
From: Ni, Ray <ray.ni@intel.com>
Sent: Friday, June 2, 2023 11:31 AM
To: devel@edk2.groups.io; Tan, Dun <dun.tan@intel.com>
Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
Subject: RE: [edk2-devel] [Patch V4 11/15] UefiCpuPkg: Use GenSmmPageTable() to create Smm S3 page table
> - mSmmS3ResumeState->SmmS3Cr3 = (UINT32)(UINTN)PTEntry;
> + mSmmS3ResumeState->SmmS3Cr3 = (UINT32)GenSmmPageTable
> (Paging4Level, 32);
Why is "Paging4Level" used for S3 page table?
The S3 page table is used by S3Resume module:
if (SmmS3ResumeState->Signature == SMM_S3_RESUME_SMM_64) {
//
// Switch to long mode to complete resume.
//
......
AsmWriteCr3 ((UINTN)SmmS3ResumeState->SmmS3Cr3);
......
AsmEnablePaging64 (
0x38,
SmmS3ResumeState->SmmS3ResumeEntryPoint,
(UINT64)(UINTN)AcpiS3Context,
0,
SmmS3ResumeState->SmmS3StackBase + SmmS3ResumeState->SmmS3StackSize
);
The S3 page table is only used when PEI runs in 32-bit mode, which resolves my concern that a CPU in 64-bit mode cannot switch from 5-level paging to 4-level paging.
And I guess your code just aligns with the old behavior.
Can you add a comment above to explain that SmmS3Cr3 is only used by the S3Resume PEIM to switch the CPU from 32-bit to 64-bit mode?
With that, Reviewed-by: Ray Ni <ray.ni@intel.com>
* Re: [edk2-devel] [Patch V4 12/15] UefiCpuPkg: Sort mSmmCpuSmramRanges in FindSmramInfo
2023-06-02 3:33 ` [edk2-devel] " Ni, Ray
@ 2023-06-02 3:43 ` duntan
0 siblings, 0 replies; 44+ messages in thread
From: duntan @ 2023-06-02 3:43 UTC (permalink / raw)
To: Ni, Ray, devel@edk2.groups.io; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
Thanks for the comments. Will update the code in next version patch.
Thanks,
Dun
-----Original Message-----
From: Ni, Ray <ray.ni@intel.com>
Sent: Friday, June 2, 2023 11:34 AM
To: devel@edk2.groups.io; Tan, Dun <dun.tan@intel.com>
Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
Subject: RE: [edk2-devel] [Patch V4 12/15] UefiCpuPkg: Sort mSmmCpuSmramRanges in FindSmramInfo
> + Buffer = AllocateZeroPool (sizeof (EFI_SMRAM_DESCRIPTOR)); ASSERT
> + (Buffer != NULL);
You can define a local variable "EFI_SMRAM_DESCRIPTOR OneSmramDescriptor" to avoid pool allocation.
* Re: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable() to create smm page table
2023-06-02 3:36 ` duntan
@ 2023-06-02 3:46 ` duntan
2023-06-02 5:08 ` Ni, Ray
0 siblings, 1 reply; 44+ messages in thread
From: duntan @ 2023-06-02 3:46 UTC (permalink / raw)
To: Ni, Ray, devel@edk2.groups.io; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
Edited the reply to make it clearer.
-----Original Message-----
From: Tan, Dun
Sent: Friday, June 2, 2023 11:36 AM
To: Ni, Ray <ray.ni@intel.com>; devel@edk2.groups.io
Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
Subject: RE: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable() to create smm page table
GenSmmPageTable() doesn't mark the "Guard page" in the "mSmmShadowStackSize range" in order to align with the old behavior.
GenSmmPageTable() is also used to create SmmS3Cr3, and the "Guard page" in the "mSmmShadowStackSize range" is not marked as non-present in SmmS3Cr3.
In the code logic, the "Guard page" in the "mSmmShadowStackSize range" is marked as non-present after InitializeMpServiceData() creates the initial smm page table. This step is only done for the smm runtime page table.
Thanks,
Dun
-----Original Message-----
From: Ni, Ray <ray.ni@intel.com>
Sent: Friday, June 2, 2023 11:23 AM
To: devel@edk2.groups.io; Tan, Dun <dun.tan@intel.com>
Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
Subject: RE: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable() to create smm page table
//
// SMM Stack Guard Enabled
// Append Shadow Stack after normal stack
// 2 more pages is allocated for each processor, one is guard page and the other is known good shadow stack.
//
// |= Stacks
// +--------------------------------------------------+---------------------------------------------------------------+
// | Known Good Stack | Guard Page | SMM Stack | Known Good Shadow Stack | Guard Page | SMM Shadow Stack |
// +--------------------------------------------------+---------------------------------------------------------------+
// | 4K | 4K |PcdCpuSmmStackSize| 4K | 4K |PcdCpuSmmShadowStackSize|
// |<---------------- mSmmStackSize ----------------->|<--------------------- mSmmShadowStackSize ------------------->|
// | |
// |<-------------------------------------------- Processor N ------------------------------------------------------->|
//
GenSmmPageTable() only sets the "Guard page" in "mSmmStackSize range" as not-present.
But the "Guard page" in "mSmmShadowStackSize range" is not marked as not-present.
Why?
Thanks,
Ray
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of duntan
> Sent: Tuesday, May 16, 2023 5:59 PM
> To: devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>;
> Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann
> <kraxel@redhat.com>
> Subject: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add
> GenSmmPageTable() to create smm page table
>
> This commit is code refinement to current smm pagetable generation
> code. Add a new GenSmmPageTable() API to create smm page table based
> on the PageTableMap() API in CpuPageTableLib. Caller only needs to
> specify the paging mode and the PhysicalAddressBits to map.
> This function can be used to create both IA32 pae paging and X64
> 5level, 4level paging.
>
> Signed-off-by: Dun Tan <dun.tan@intel.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c | 2 +-
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 15
> +++++++++++++++
> UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 65
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 220
> ++++++++++++++++++++++++++--------------------------------------------
> ++++++++++++++++++++++++++---------------
> ----------------------------------------------------------------------
> ----------------------------
> -------------------------------------
> 4 files changed, 107 insertions(+), 195 deletions(-)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> index 9c8107080a..b11264ce4a 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> @@ -63,7 +63,7 @@ SmmInitPageTable (
> InitializeIDTSmmStackGuard ();
> }
>
> - return Gen4GPageTable (TRUE);
> + return GenSmmPageTable (PagingPae, mPhysicalAddressBits);
> }
>
> /**
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> index a7da9673a5..5399659bc0 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> @@ -553,6 +553,21 @@ Gen4GPageTable (
> IN BOOLEAN Is32BitPageTable
> );
>
> +/**
> + Create page table based on input PagingMode and PhysicalAddressBits in smm.
> +
> + @param[in] PagingMode The paging mode.
> + @param[in] PhysicalAddressBits The bits of physical address to map.
> +
> + @retval PageTable Address
> +
> +**/
> +UINTN
> +GenSmmPageTable (
> + IN PAGING_MODE PagingMode,
> + IN UINT8 PhysicalAddressBits
> + );
> +
> /**
> Initialize global data for MP synchronization.
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> index ef0ba9a355..138ff43c9d 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> @@ -1642,6 +1642,71 @@ EdkiiSmmClearMemoryAttributes (
> return SmmClearMemoryAttributes (BaseAddress, Length, Attributes);
> }
>
> +/**
> + Create page table based on input PagingMode and PhysicalAddressBits in smm.
> +
> + @param[in] PagingMode The paging mode.
> + @param[in] PhysicalAddressBits The bits of physical address to map.
> +
> + @retval PageTable Address
> +
> +**/
> +UINTN
> +GenSmmPageTable (
> + IN PAGING_MODE PagingMode,
> + IN UINT8 PhysicalAddressBits
> + )
> +{
> + UINTN PageTableBufferSize;
> + UINTN PageTable;
> + VOID *PageTableBuffer;
> + IA32_MAP_ATTRIBUTE MapAttribute;
> + IA32_MAP_ATTRIBUTE MapMask;
> + RETURN_STATUS Status;
> + UINTN GuardPage;
> + UINTN Index;
> + UINT64 Length;
> +
> + Length = LShiftU64 (1, PhysicalAddressBits);
> + PageTable = 0;
> + PageTableBufferSize = 0;
> + MapMask.Uint64 = MAX_UINT64;
> + MapAttribute.Uint64 = mAddressEncMask;
> + MapAttribute.Bits.Present = 1;
> + MapAttribute.Bits.ReadWrite = 1;
> + MapAttribute.Bits.UserSupervisor = 1;
> + MapAttribute.Bits.Accessed = 1;
> + MapAttribute.Bits.Dirty = 1;
> +
> + Status = PageTableMap (&PageTable, PagingMode, NULL,
> &PageTableBufferSize, 0, Length, &MapAttribute, &MapMask, NULL);
> + ASSERT (Status == RETURN_BUFFER_TOO_SMALL); DEBUG ((DEBUG_INFO,
> + "GenSMMPageTable: 0x%x bytes needed for initial
> SMM page table\n", PageTableBufferSize));
> + PageTableBuffer = AllocatePageTableMemory (EFI_SIZE_TO_PAGES
> (PageTableBufferSize));
> + ASSERT (PageTableBuffer != NULL);
> + Status = PageTableMap (&PageTable, PagingMode, PageTableBuffer,
> &PageTableBufferSize, 0, Length, &MapAttribute, &MapMask, NULL);
> + ASSERT (Status == RETURN_SUCCESS);
> + ASSERT (PageTableBufferSize == 0);
> +
> + if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
> + //
> + // Mark the 4KB guard page between known good stack and smm stack
> + as
> non-present
> + //
> + for (Index = 0; Index < gSmmCpuPrivate-
> >SmmCoreEntryContext.NumberOfCpus; Index++) {
> + GuardPage = mSmmStackArrayBase + EFI_PAGE_SIZE + Index *
> (mSmmStackSize + mSmmShadowStackSize);
> + Status = ConvertMemoryPageAttributes (PageTable, PagingMode,
> GuardPage, SIZE_4KB, EFI_MEMORY_RP, TRUE, NULL);
> + }
> + }
> +
> + if ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & BIT1) != 0) {
> + //
> + // Mark [0, 4k] as non-present
> + //
> + Status = ConvertMemoryPageAttributes (PageTable, PagingMode, 0,
> + SIZE_4KB,
> EFI_MEMORY_RP, TRUE, NULL);
> + }
> +
> + return (UINTN)PageTable;
> +}
> +
> /**
> This function retrieves the attributes of the memory region specified by
> BaseAddress and Length. If different attributes are got from
> different part diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> index 25ced50955..060e6dc147 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> @@ -167,160 +167,6 @@ CalculateMaximumSupportAddress (
> return PhysicalAddressBits;
> }
>
> -/**
> - Set static page table.
> -
> - @param[in] PageTable Address of page table.
> - @param[in] PhysicalAddressBits The maximum physical address bits
> supported.
> -**/
> -VOID
> -SetStaticPageTable (
> - IN UINTN PageTable,
> - IN UINT8 PhysicalAddressBits
> - )
> -{
> - UINT64 PageAddress;
> - UINTN NumberOfPml5EntriesNeeded;
> - UINTN NumberOfPml4EntriesNeeded;
> - UINTN NumberOfPdpEntriesNeeded;
> - UINTN IndexOfPml5Entries;
> - UINTN IndexOfPml4Entries;
> - UINTN IndexOfPdpEntries;
> - UINTN IndexOfPageDirectoryEntries;
> - UINT64 *PageMapLevel5Entry;
> - UINT64 *PageMapLevel4Entry;
> - UINT64 *PageMap;
> - UINT64 *PageDirectoryPointerEntry;
> - UINT64 *PageDirectory1GEntry;
> - UINT64 *PageDirectoryEntry;
> -
> - //
> - // IA-32e paging translates 48-bit linear addresses to 52-bit
> physical addresses
> - // when 5-Level Paging is disabled.
> - //
> - ASSERT (PhysicalAddressBits <= 52);
> - if (!m5LevelPagingNeeded && (PhysicalAddressBits > 48)) {
> - PhysicalAddressBits = 48;
> - }
> -
> - NumberOfPml5EntriesNeeded = 1;
> - if (PhysicalAddressBits > 48) {
> - NumberOfPml5EntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits -
> 48);
> - PhysicalAddressBits = 48;
> - }
> -
> - NumberOfPml4EntriesNeeded = 1;
> - if (PhysicalAddressBits > 39) {
> - NumberOfPml4EntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits -
> 39);
> - PhysicalAddressBits = 39;
> - }
> -
> - NumberOfPdpEntriesNeeded = 1;
> - ASSERT (PhysicalAddressBits > 30);
> - NumberOfPdpEntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits
> - 30);
> -
> - //
> - // By architecture only one PageMapLevel4 exists - so lets allocate
> storage for it.
> - //
> - PageMap = (VOID *)PageTable;
> -
> - PageMapLevel4Entry = PageMap;
> - PageMapLevel5Entry = NULL;
> - if (m5LevelPagingNeeded) {
> - //
> - // By architecture only one PageMapLevel5 exists - so lets allocate storage for
> it.
> - //
> - PageMapLevel5Entry = PageMap;
> - }
> -
> - PageAddress = 0;
> -
> - for ( IndexOfPml5Entries = 0
> - ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded
> - ; IndexOfPml5Entries++, PageMapLevel5Entry++)
> - {
> - //
> - // Each PML5 entry points to a page of PML4 entires.
> - // So lets allocate space for them and fill them in in the IndexOfPml4Entries
> loop.
> - // When 5-Level Paging is disabled, below allocation happens only once.
> - //
> - if (m5LevelPagingNeeded) {
> - PageMapLevel4Entry = (UINT64 *)((*PageMapLevel5Entry) &
> ~mAddressEncMask & gPhyMask);
> - if (PageMapLevel4Entry == NULL) {
> - PageMapLevel4Entry = AllocatePageTableMemory (1);
> - ASSERT (PageMapLevel4Entry != NULL);
> - ZeroMem (PageMapLevel4Entry, EFI_PAGES_TO_SIZE (1));
> -
> - *PageMapLevel5Entry = (UINT64)(UINTN)PageMapLevel4Entry |
> mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> - }
> - }
> -
> - for (IndexOfPml4Entries = 0; IndexOfPml4Entries <
> (NumberOfPml5EntriesNeeded == 1 ? NumberOfPml4EntriesNeeded : 512);
> IndexOfPml4Entries++, PageMapLevel4Entry++) {
> - //
> - // Each PML4 entry points to a page of Page Directory Pointer entries.
> - //
> - PageDirectoryPointerEntry = (UINT64 *)((*PageMapLevel4Entry) &
> ~mAddressEncMask & gPhyMask);
> - if (PageDirectoryPointerEntry == NULL) {
> - PageDirectoryPointerEntry = AllocatePageTableMemory (1);
> - ASSERT (PageDirectoryPointerEntry != NULL);
> - ZeroMem (PageDirectoryPointerEntry, EFI_PAGES_TO_SIZE (1));
> -
> - *PageMapLevel4Entry = (UINT64)(UINTN)PageDirectoryPointerEntry |
> mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> - }
> -
> - if (m1GPageTableSupport) {
> - PageDirectory1GEntry = PageDirectoryPointerEntry;
> - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512;
> IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress +=
> SIZE_1GB) {
> - if ((IndexOfPml4Entries == 0) && (IndexOfPageDirectoryEntries < 4)) {
> - //
> - // Skip the < 4G entries
> - //
> - continue;
> - }
> -
> - //
> - // Fill in the Page Directory entries
> - //
> - *PageDirectory1GEntry = PageAddress | mAddressEncMask | IA32_PG_PS
> | PAGE_ATTRIBUTE_BITS;
> - }
> - } else {
> - PageAddress = BASE_4GB;
> - for (IndexOfPdpEntries = 0; IndexOfPdpEntries <
> (NumberOfPml4EntriesNeeded == 1 ? NumberOfPdpEntriesNeeded : 512);
> IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
> - if ((IndexOfPml4Entries == 0) && (IndexOfPdpEntries < 4)) {
> - //
> - // Skip the < 4G entries
> - //
> - continue;
> - }
> -
> - //
> - // Each Directory Pointer entries points to a page of Page Directory entires.
> - // So allocate space for them and fill them in in the
> IndexOfPageDirectoryEntries loop.
> - //
> - PageDirectoryEntry = (UINT64 *)((*PageDirectoryPointerEntry) &
> ~mAddressEncMask & gPhyMask);
> - if (PageDirectoryEntry == NULL) {
> - PageDirectoryEntry = AllocatePageTableMemory (1);
> - ASSERT (PageDirectoryEntry != NULL);
> - ZeroMem (PageDirectoryEntry, EFI_PAGES_TO_SIZE (1));
> -
> - //
> - // Fill in a Page Directory Pointer Entries
> - //
> - *PageDirectoryPointerEntry = (UINT64)(UINTN)PageDirectoryEntry |
> mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> - }
> -
> - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512;
> IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress +=
> SIZE_2MB) {
> - //
> - // Fill in the Page Directory entries
> - //
> - *PageDirectoryEntry = PageAddress | mAddressEncMask | IA32_PG_PS |
> PAGE_ATTRIBUTE_BITS;
> - }
> - }
> - }
> - }
> - }
> -}
> -
> /**
> Create PageTable for SMM use.
>
> @@ -332,15 +178,16 @@ SmmInitPageTable (
> VOID
> )
> {
> - EFI_PHYSICAL_ADDRESS Pages;
> - UINT64 *PTEntry;
> + UINTN PageTable;
> LIST_ENTRY *FreePage;
> UINTN Index;
> UINTN PageFaultHandlerHookAddress;
> IA32_IDT_GATE_DESCRIPTOR *IdtEntry;
> EFI_STATUS Status;
> + UINT64 *PdptEntry;
> UINT64 *Pml4Entry;
> UINT64 *Pml5Entry;
> + UINT8 PhysicalAddressBits;
>
> //
> // Initialize spin lock
> @@ -357,59 +204,44 @@ SmmInitPageTable (
> } else {
> mPagingMode = m1GPageTableSupport ? Paging4Level1GB : Paging4Level;
> }
> +
> DEBUG ((DEBUG_INFO, "5LevelPaging Needed - %d\n",
> m5LevelPagingNeeded));
> DEBUG ((DEBUG_INFO, "1GPageTable Support - %d\n",
> m1GPageTableSupport));
> DEBUG ((DEBUG_INFO, "PcdCpuSmmRestrictedMemoryAccess - %d\n",
> mCpuSmmRestrictedMemoryAccess));
> DEBUG ((DEBUG_INFO, "PhysicalAddressBits - %d\n",
> mPhysicalAddressBits));
> - //
> - // Generate PAE page table for the first 4GB memory space
> - //
> - Pages = Gen4GPageTable (FALSE);
>
> //
> - // Set IA32_PG_PMNT bit to mask this entry
> + // Generate initial SMM page table.
> + // Only map [0, 4G] when PcdCpuSmmRestrictedMemoryAccess is FALSE.
> //
> - PTEntry = (UINT64 *)(UINTN)Pages;
> - for (Index = 0; Index < 4; Index++) {
> - PTEntry[Index] |= IA32_PG_PMNT;
> - }
> -
> - //
> - // Fill Page-Table-Level4 (PML4) entry
> - //
> - Pml4Entry = (UINT64 *)AllocatePageTableMemory (1);
> - ASSERT (Pml4Entry != NULL);
> - *Pml4Entry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> - ZeroMem (Pml4Entry + 1, EFI_PAGE_SIZE - sizeof (*Pml4Entry));
> -
> - //
> - // Set sub-entries number
> - //
> - SetSubEntriesNum (Pml4Entry, 3);
> - PTEntry = Pml4Entry;
> + PhysicalAddressBits = mCpuSmmRestrictedMemoryAccess ?
> mPhysicalAddressBits : 32;
> + PageTable = GenSmmPageTable (mPagingMode, PhysicalAddressBits);
>
> if (m5LevelPagingNeeded) {
> + Pml5Entry = (UINT64 *)PageTable;
> //
> - // Fill PML5 entry
> - //
> - Pml5Entry = (UINT64 *)AllocatePageTableMemory (1);
> - ASSERT (Pml5Entry != NULL);
> - *Pml5Entry = (UINTN)Pml4Entry | mAddressEncMask |
> PAGE_ATTRIBUTE_BITS;
> - ZeroMem (Pml5Entry + 1, EFI_PAGE_SIZE - sizeof (*Pml5Entry));
> - //
> - // Set sub-entries number
> + // Set Pml5Entry sub-entries number for smm PF handler usage.
> //
> SetSubEntriesNum (Pml5Entry, 1);
> - PTEntry = Pml5Entry;
> + Pml4Entry = (UINT64 *)((*Pml5Entry) & ~mAddressEncMask &
> + gPhyMask); } else {
> + Pml4Entry = (UINT64 *)PageTable;
> + }
> +
> + //
> + // Set IA32_PG_PMNT bit to mask first 4 PdptEntry.
> + //
> + PdptEntry = (UINT64 *)((*Pml4Entry) & ~mAddressEncMask & gPhyMask);
> + for (Index = 0; Index < 4; Index++) {
> + PdptEntry[Index] |= IA32_PG_PMNT;
> }
>
> - if (mCpuSmmRestrictedMemoryAccess) {
> + if (!mCpuSmmRestrictedMemoryAccess) {
> //
> - // When access to non-SMRAM memory is restricted, create page table
> - // that covers all memory space.
> + // Set Pml4Entry sub-entries number for smm PF handler usage.
> //
> - SetStaticPageTable ((UINTN)PTEntry, mPhysicalAddressBits);
> - } else {
> + SetSubEntriesNum (Pml4Entry, 3);
> +
> //
> // Add pages to page pool
> //
> @@ -466,7 +298,7 @@ SmmInitPageTable (
> //
> // Return the address of PML4/PML5 (to set CR3)
> //
> - return (UINT32)(UINTN)PTEntry;
> + return (UINT32)PageTable;
> }
>
> /**
> --
> 2.31.1.windows.1
* Re: [edk2-devel] [Patch V4 14/15] UefiCpuPkg: Refinement to smm runtime InitPaging() code
2023-05-16 9:59 ` [Patch V4 14/15] UefiCpuPkg: Refinement to smm runtime InitPaging() code duntan
@ 2023-06-02 3:54 ` Ni, Ray
2023-06-02 3:59 ` duntan
0 siblings, 1 reply; 44+ messages in thread
From: Ni, Ray @ 2023-06-02 3:54 UTC (permalink / raw)
To: devel@edk2.groups.io, Tan, Dun; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
> + } else {
> + MemoryAttrMask = EFI_MEMORY_XP;
> + for (Index = 0; Index < mSmmCpuSmramRangeCount; Index++) {
> + Base = mSmmCpuSmramRanges[Index].CpuStart;
> + if ((Base > PreviousAddress) && mXdSupported) {
Is the "mXdSupported" check really needed? You didn't add that check for the last remaining range.
ConvertMemoryPageAttributes() can handle the case when XD is not supported by the CPU.
* Re: [Patch V4 13/15] UefiCpuPkg: Sort mProtectionMemRange when ReadyToLock
2023-06-02 3:35 ` Ni, Ray
@ 2023-06-02 3:55 ` duntan
0 siblings, 0 replies; 44+ messages in thread
From: duntan @ 2023-06-02 3:55 UTC (permalink / raw)
To: Ni, Ray, devel@edk2.groups.io; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
Because the code logic in InitPaging() in Patch 14/15 requires that the two arrays have been sorted.
Yes, the code logic in FindSmramInfo() that deals with the un-sorted mSmmCpuSmramRanges array can be removed. Will add more changes to do this.
Thanks,
Dun
-----Original Message-----
From: Ni, Ray <ray.ni@intel.com>
Sent: Friday, June 2, 2023 11:36 AM
To: Tan, Dun <dun.tan@intel.com>; devel@edk2.groups.io
Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
Subject: RE: [Patch V4 13/15] UefiCpuPkg: Sort mProtectionMemRange when ReadyToLock
Why do you add the sort logic?
I thought you might have further changes to remove some unnecessary logic that deals with the un-sorted array.
> -----Original Message-----
> From: Ni, Ray
> Sent: Friday, June 2, 2023 11:34 AM
> To: Tan, Dun <dun.tan@intel.com>; devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R
> <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: RE: [Patch V4 13/15] UefiCpuPkg: Sort mProtectionMemRange
> when ReadyToLock
>
> Similar comments as patch #12.
> You could avoid pool allocation.
>
> > -----Original Message-----
> > From: Tan, Dun <dun.tan@intel.com>
> > Sent: Tuesday, May 16, 2023 6:00 PM
> > To: devel@edk2.groups.io
> > Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>;
> > Kumar,
> Rahul
> > R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> > Subject: [Patch V4 13/15] UefiCpuPkg: Sort mProtectionMemRange when
> > ReadyToLock
> >
> > Sort mProtectionMemRange in InitProtectedMemRange() when
> > ReadyToLock.
> >
> > Signed-off-by: Dun Tan <dun.tan@intel.com>
> > Cc: Eric Dong <eric.dong@intel.com>
> > Cc: Ray Ni <ray.ni@intel.com>
> > Cc: Rahul Kumar <rahul1.kumar@intel.com>
> > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > ---
> > UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c | 35
> > +++++++++++++++++++++++++++++++++++
> > 1 file changed, 35 insertions(+)
> >
> > diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> > b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> > index 5625ba0cac..b298e2fb99 100644
> > --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> > +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> > @@ -375,6 +375,32 @@ IsAddressSplit (
> > return FALSE;
> > }
> >
> > +/**
> > + Function to compare 2 MEMORY_PROTECTION_RANGE based on range base.
> > +
> > + @param[in] Buffer1 pointer to the first MEMORY_PROTECTION_RANGE to compare
> > + @param[in] Buffer2 pointer to the second MEMORY_PROTECTION_RANGE to compare
> > +
> > + @retval 0 Buffer1 equal to Buffer2
> > + @retval <0 Buffer1 is less than Buffer2
> > + @retval >0 Buffer1 is greater than Buffer2
> > +**/
> > +INTN
> > +EFIAPI
> > +ProtectionRangeCompare (
> > + IN CONST VOID *Buffer1,
> > + IN CONST VOID *Buffer2
> > + )
> > +{
> > + if (((MEMORY_PROTECTION_RANGE *)Buffer1)->Range.Base >
> > ((MEMORY_PROTECTION_RANGE *)Buffer2)->Range.Base) {
> > + return 1;
> > + } else if (((MEMORY_PROTECTION_RANGE *)Buffer1)->Range.Base <
> > ((MEMORY_PROTECTION_RANGE *)Buffer2)->Range.Base) {
> > + return -1;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > /**
> > Initialize the protected memory ranges and the 4KB-page mapped
> > memory ranges.
> >
> > @@ -397,6 +423,7 @@ InitProtectedMemRange (
> > EFI_PHYSICAL_ADDRESS Base2MBAlignedAddress;
> > UINT64 High4KBPageSize;
> > UINT64 Low4KBPageSize;
> > + VOID *Buffer;
> >
> > NumberOfDescriptors = 0;
> > NumberOfAddedDescriptors = mSmmCpuSmramRangeCount; @@ -533,6
> > +560,14 @@ InitProtectedMemRange (
> >
> > mSplitMemRangeCount = NumberOfSpliteRange;
> >
> > + //
> > + // Sort the mProtectionMemRange
> > + //
> > + Buffer = AllocateZeroPool (sizeof (MEMORY_PROTECTION_RANGE));
> > + ASSERT (Buffer != NULL); QuickSort (mProtectionMemRange,
> > + mProtectionMemRangeCount, sizeof
> > (MEMORY_PROTECTION_RANGE),
> > (BASE_SORT_COMPARE)ProtectionRangeCompare, Buffer);
> > + FreePool (Buffer);
> > +
> > DEBUG ((DEBUG_INFO, "SMM Profile Memory Ranges:\n"));
> > for (Index = 0; Index < mProtectionMemRangeCount; Index++) {
> > DEBUG ((DEBUG_INFO, "mProtectionMemRange[%d].Base = %lx\n",
> > Index, mProtectionMemRange[Index].Range.Base));
> > --
> > 2.31.1.windows.1
* Re: [Patch V4 15/15] UefiCpuPkg/PiSmmCpuDxeSmm: Remove unnecessary function
2023-05-16 9:59 ` [Patch V4 15/15] UefiCpuPkg/PiSmmCpuDxeSmm: Remove unnecessary function duntan
@ 2023-06-02 3:55 ` Ni, Ray
0 siblings, 0 replies; 44+ messages in thread
From: Ni, Ray @ 2023-06-02 3:55 UTC (permalink / raw)
To: Tan, Dun, devel@edk2.groups.io; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
Reviewed-by: Ray Ni <ray.ni@intel.com>
> -----Original Message-----
> From: Tan, Dun <dun.tan@intel.com>
> Sent: Tuesday, May 16, 2023 6:00 PM
> To: devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Kumar, Rahul
> R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: [Patch V4 15/15] UefiCpuPkg/PiSmmCpuDxeSmm: Remove unnecessary
> function
>
> Remove unnecessary function SetNotPresentPage(). We can directly
> use ConvertMemoryPageAttributes to set a range to non-present.
>
> Signed-off-by: Dun Tan <dun.tan@intel.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c | 8 ++++++--
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 16 ----------------
> UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 22 ------------
> ----------
> 3 files changed, 6 insertions(+), 40 deletions(-)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
> index d69e976269..7fa1867b63 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
> @@ -1074,10 +1074,14 @@ PiCpuSmmEntry (
> mSmmShadowStackSize
> );
> if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
> - SetNotPresentPage (
> + ConvertMemoryPageAttributes (
> Cr3,
> + mPagingMode,
> (EFI_PHYSICAL_ADDRESS)(UINTN)Stacks + mSmmStackSize +
> EFI_PAGES_TO_SIZE (1) + (mSmmStackSize + mSmmShadowStackSize) * Index,
> - EFI_PAGES_TO_SIZE (1)
> + EFI_PAGES_TO_SIZE (1),
> + EFI_MEMORY_RP,
> + TRUE,
> + NULL
> );
> }
> }
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> index 12ad86028e..0dc4d758cc 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> @@ -1247,22 +1247,6 @@ SetShadowStack (
> IN UINT64 Length
> );
>
> -/**
> - Set not present memory.
> -
> - @param[in] Cr3 The page table base address.
> - @param[in] BaseAddress The physical address that is the start address of a
> memory region.
> - @param[in] Length The size in bytes of the memory region.
> -
> - @retval EFI_SUCCESS The not present memory is set.
> -**/
> -EFI_STATUS
> -SetNotPresentPage (
> - IN UINTN Cr3,
> - IN EFI_PHYSICAL_ADDRESS BaseAddress,
> - IN UINT64 Length
> - );
> -
> /**
> Initialize the shadow stack related data structure.
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> index 138ff43c9d..95de472ebf 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> @@ -752,28 +752,6 @@ SetShadowStack (
> return Status;
> }
>
> -/**
> - Set not present memory.
> -
> - @param[in] Cr3 The page table base address.
> - @param[in] BaseAddress The physical address that is the start address of a
> memory region.
> - @param[in] Length The size in bytes of the memory region.
> -
> - @retval EFI_SUCCESS The not present memory is set.
> -**/
> -EFI_STATUS
> -SetNotPresentPage (
> - IN UINTN Cr3,
> - IN EFI_PHYSICAL_ADDRESS BaseAddress,
> - IN UINT64 Length
> - )
> -{
> - EFI_STATUS Status;
> -
> - Status = SmmSetMemoryAttributesEx (Cr3, mPagingMode, BaseAddress,
> Length, EFI_MEMORY_RP);
> - return Status;
> -}
> -
> /**
> Retrieves a pointer to the system configuration table from the SMM System
> Table
> based on a specified GUID.
> --
> 2.31.1.windows.1
* Re: [edk2-devel] [Patch V4 14/15] UefiCpuPkg: Refinement to smm runtime InitPaging() code
2023-06-02 3:54 ` [edk2-devel] " Ni, Ray
@ 2023-06-02 3:59 ` duntan
0 siblings, 0 replies; 44+ messages in thread
From: duntan @ 2023-06-02 3:59 UTC (permalink / raw)
To: Ni, Ray, devel@edk2.groups.io; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
Sure, ConvertMemoryPageAttributes() can handle this. I will remove the "mXdSupported" check there in the next version of the patch.
Thanks,
Dun
-----Original Message-----
From: Ni, Ray <ray.ni@intel.com>
Sent: Friday, June 2, 2023 11:55 AM
To: devel@edk2.groups.io; Tan, Dun <dun.tan@intel.com>
Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
Subject: RE: [edk2-devel] [Patch V4 14/15] UefiCpuPkg: Refinement to smm runtime InitPaging() code
> + } else {
> + MemoryAttrMask = EFI_MEMORY_XP;
> + for (Index = 0; Index < mSmmCpuSmramRangeCount; Index++) {
> + Base = mSmmCpuSmramRanges[Index].CpuStart;
> + if ((Base > PreviousAddress) && mXdSupported) {
Is the "mXdSupported" check really needed? Note that you didn't add that check for the last remaining range.
ConvertMemoryPageAttributes() can handle the case when XD is not supported by CPU.
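To illustrate the point being made here, a minimal sketch of why the caller-side check is redundant: the attribute-conversion layer can simply drop the NX request itself when XD is unsupported. The names below (ComputePageAttributes, PAGE_NX) are assumptions for this sketch, not the actual PiSmmCpuDxeSmm code:

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* Hypothetical attribute bits, modeled after x86 paging entries. */
#define PAGE_PRESENT  (1ULL << 0)
#define PAGE_NX       (1ULL << 63)

/* Sketch: the conversion layer masks out the NX bit itself when the CPU
   lacks XD support, so callers never need to test mXdSupported first. */
uint64_t
ComputePageAttributes (bool XdSupported, uint64_t Requested)
{
  if (!XdSupported) {
    Requested &= ~PAGE_NX;   /* silently drop the unsupported bit */
  }

  return Requested;
}
```

With this shape, both the per-SMRAM-range loop and the last remaining range go through the same path regardless of XD support.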
* Re: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable() to create smm page table
2023-06-02 3:46 ` duntan
@ 2023-06-02 5:08 ` Ni, Ray
2023-06-02 7:33 ` duntan
0 siblings, 1 reply; 44+ messages in thread
From: Ni, Ray @ 2023-06-02 5:08 UTC (permalink / raw)
To: Tan, Dun, devel@edk2.groups.io; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
I see.
The GuardPage in normal stack is marked as not-present inside GenSmmPageTable.
The GuardPage in shadow stack is marked as not-present after calling InitializeMpServiceData().
Do you think it would be clearer to group them together?
Thanks,
Ray
> -----Original Message-----
> From: Tan, Dun <dun.tan@intel.com>
> Sent: Friday, June 2, 2023 11:47 AM
> To: Ni, Ray <ray.ni@intel.com>; devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R
> <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: RE: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable()
> to create smm page table
>
> Edited the reply to make it clearer.
>
> -----Original Message-----
> From: Tan, Dun
> Sent: Friday, June 2, 2023 11:36 AM
> To: Ni, Ray <ray.ni@intel.com>; devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R
> <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: RE: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable()
> to create smm page table
>
> GenSmmPageTable() doesn't mark the "Guard page" in the "mSmmShadowStackSize range" in order to align with the old behavior.
> GenSmmPageTable() is also used to create SmmS3Cr3, and the "Guard page" in the "mSmmShadowStackSize range" is not marked as non-present in SmmS3Cr3.
> In the code logic, the "Guard page" in the "mSmmShadowStackSize range" is marked as not-present after InitializeMpServiceData() creates the initial smm page table.
> This process is only done for the smm runtime page table.
>
> Thanks,
> Dun
> -----Original Message-----
> From: Ni, Ray <ray.ni@intel.com>
> Sent: Friday, June 2, 2023 11:23 AM
> To: devel@edk2.groups.io; Tan, Dun <dun.tan@intel.com>
> Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R
> <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: RE: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable()
> to create smm page table
>
>
> //
> // SMM Stack Guard Enabled
> // Append Shadow Stack after normal stack
> // 2 more pages is allocated for each processor, one is guard page and the other is known good shadow stack.
> //
> // |= Stacks
> // +--------------------------------------------------+---------------------------------------------------------------+
> // | Known Good Stack | Guard Page |     SMM Stack    | Known Good Shadow Stack | Guard Page |    SMM Shadow Stack    |
> // +--------------------------------------------------+---------------------------------------------------------------+
> // |        4K        |     4K     |PcdCpuSmmStackSize|            4K           |     4K     |PcdCpuSmmShadowStackSize|
> // |<---------------- mSmmStackSize ----------------->|<--------------------- mSmmShadowStackSize ------------------->|
> // |                                                  |                                                               |
> // |<-------------------------------------------- Processor N ------------------------------------------------------>|
> //
>
> GenSmmPageTable() only sets the "Guard page" in "mSmmStackSize range" as not-present.
> But the "Guard page" in "mSmmShadowStackSize range" is not marked as not-present.
> Why?
>
> Thanks,
> Ray
>
> > -----Original Message-----
> > From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of duntan
> > Sent: Tuesday, May 16, 2023 5:59 PM
> > To: devel@edk2.groups.io
> > Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>;
> > Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann
> > <kraxel@redhat.com>
> > Subject: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add
> > GenSmmPageTable() to create smm page table
> >
> > This commit is code refinement to current smm pagetable generation
> > code. Add a new GenSmmPageTable() API to create smm page table based
> > on the PageTableMap() API in CpuPageTableLib. Caller only needs to
> > specify the paging mode and the PhysicalAddressBits to map.
> > This function can be used to create both IA32 pae paging and X64
> > 5level, 4level paging.
> >
> > Signed-off-by: Dun Tan <dun.tan@intel.com>
> > Cc: Eric Dong <eric.dong@intel.com>
> > Cc: Ray Ni <ray.ni@intel.com>
> > Cc: Rahul Kumar <rahul1.kumar@intel.com>
> > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > ---
> > UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c | 2 +-
> > UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h         |  15 +++++++++++++++
> > UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c |  65 +++++++++++++++++
> > UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c            | 220 ++++++------------
> > 4 files changed, 107 insertions(+), 195 deletions(-)
> >
> > diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> > b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> > index 9c8107080a..b11264ce4a 100644
> > --- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> > +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> > @@ -63,7 +63,7 @@ SmmInitPageTable (
> > InitializeIDTSmmStackGuard ();
> > }
> >
> > - return Gen4GPageTable (TRUE);
> > + return GenSmmPageTable (PagingPae, mPhysicalAddressBits);
> > }
> >
> > /**
> > diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> > b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> > index a7da9673a5..5399659bc0 100644
> > --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> > +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> > @@ -553,6 +553,21 @@ Gen4GPageTable (
> > IN BOOLEAN Is32BitPageTable
> > );
> >
> > +/**
> > + Create page table based on input PagingMode and PhysicalAddressBits in
> smm.
> > +
> > + @param[in] PagingMode The paging mode.
> > + @param[in] PhysicalAddressBits The bits of physical address to map.
> > +
> > + @retval PageTable Address
> > +
> > +**/
> > +UINTN
> > +GenSmmPageTable (
> > + IN PAGING_MODE PagingMode,
> > + IN UINT8 PhysicalAddressBits
> > + );
> > +
> > /**
> > Initialize global data for MP synchronization.
> >
> > diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> > b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> > index ef0ba9a355..138ff43c9d 100644
> > --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> > +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> > @@ -1642,6 +1642,71 @@ EdkiiSmmClearMemoryAttributes (
> > return SmmClearMemoryAttributes (BaseAddress, Length, Attributes);
> > }
> >
> > +/**
> > + Create page table based on input PagingMode and PhysicalAddressBits in
> smm.
> > +
> > + @param[in] PagingMode The paging mode.
> > + @param[in] PhysicalAddressBits The bits of physical address to map.
> > +
> > + @retval PageTable Address
> > +
> > +**/
> > +UINTN
> > +GenSmmPageTable (
> > + IN PAGING_MODE PagingMode,
> > + IN UINT8 PhysicalAddressBits
> > + )
> > +{
> > + UINTN PageTableBufferSize;
> > + UINTN PageTable;
> > + VOID *PageTableBuffer;
> > + IA32_MAP_ATTRIBUTE MapAttribute;
> > + IA32_MAP_ATTRIBUTE MapMask;
> > + RETURN_STATUS Status;
> > + UINTN GuardPage;
> > + UINTN Index;
> > + UINT64 Length;
> > +
> > + Length = LShiftU64 (1, PhysicalAddressBits);
> > + PageTable = 0;
> > + PageTableBufferSize = 0;
> > + MapMask.Uint64 = MAX_UINT64;
> > + MapAttribute.Uint64 = mAddressEncMask;
> > + MapAttribute.Bits.Present = 1;
> > + MapAttribute.Bits.ReadWrite = 1;
> > + MapAttribute.Bits.UserSupervisor = 1;
> > + MapAttribute.Bits.Accessed = 1;
> > + MapAttribute.Bits.Dirty = 1;
> > +
> > +  Status = PageTableMap (&PageTable, PagingMode, NULL, &PageTableBufferSize, 0, Length, &MapAttribute, &MapMask, NULL);
> > +  ASSERT (Status == RETURN_BUFFER_TOO_SMALL);
> > +  DEBUG ((DEBUG_INFO, "GenSMMPageTable: 0x%x bytes needed for initial SMM page table\n", PageTableBufferSize));
> > +  PageTableBuffer = AllocatePageTableMemory (EFI_SIZE_TO_PAGES (PageTableBufferSize));
> > +  ASSERT (PageTableBuffer != NULL);
> > +  Status = PageTableMap (&PageTable, PagingMode, PageTableBuffer, &PageTableBufferSize, 0, Length, &MapAttribute, &MapMask, NULL);
> > +  ASSERT (Status == RETURN_SUCCESS);
> > +  ASSERT (PageTableBufferSize == 0);
> > +
> > +  if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
> > +    //
> > +    // Mark the 4KB guard page between known good stack and smm stack as non-present
> > +    //
> > +    for (Index = 0; Index < gSmmCpuPrivate->SmmCoreEntryContext.NumberOfCpus; Index++) {
> > +      GuardPage = mSmmStackArrayBase + EFI_PAGE_SIZE + Index * (mSmmStackSize + mSmmShadowStackSize);
> > +      Status    = ConvertMemoryPageAttributes (PageTable, PagingMode, GuardPage, SIZE_4KB, EFI_MEMORY_RP, TRUE, NULL);
> > +    }
> > +  }
> > +
> > +  if ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & BIT1) != 0) {
> > +    //
> > +    // Mark [0, 4k] as non-present
> > +    //
> > +    Status = ConvertMemoryPageAttributes (PageTable, PagingMode, 0, SIZE_4KB, EFI_MEMORY_RP, TRUE, NULL);
> > +  }
> > +
> > + return (UINTN)PageTable;
> > +}
> > +
> > /**
> > This function retrieves the attributes of the memory region specified by
> > BaseAddress and Length. If different attributes are got from
> > different part diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> > b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> > index 25ced50955..060e6dc147 100644
> > --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> > +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> > @@ -167,160 +167,6 @@ CalculateMaximumSupportAddress (
> > return PhysicalAddressBits;
> > }
> >
> > -/**
> > - Set static page table.
> > -
> > - @param[in] PageTable Address of page table.
> > - @param[in] PhysicalAddressBits The maximum physical address bits
> > supported.
> > -**/
> > -VOID
> > -SetStaticPageTable (
> > - IN UINTN PageTable,
> > - IN UINT8 PhysicalAddressBits
> > - )
> > -{
> > - UINT64 PageAddress;
> > - UINTN NumberOfPml5EntriesNeeded;
> > - UINTN NumberOfPml4EntriesNeeded;
> > - UINTN NumberOfPdpEntriesNeeded;
> > - UINTN IndexOfPml5Entries;
> > - UINTN IndexOfPml4Entries;
> > - UINTN IndexOfPdpEntries;
> > - UINTN IndexOfPageDirectoryEntries;
> > - UINT64 *PageMapLevel5Entry;
> > - UINT64 *PageMapLevel4Entry;
> > - UINT64 *PageMap;
> > - UINT64 *PageDirectoryPointerEntry;
> > - UINT64 *PageDirectory1GEntry;
> > - UINT64 *PageDirectoryEntry;
> > -
> > - //
> > - // IA-32e paging translates 48-bit linear addresses to 52-bit
> > physical addresses
> > - // when 5-Level Paging is disabled.
> > - //
> > - ASSERT (PhysicalAddressBits <= 52);
> > - if (!m5LevelPagingNeeded && (PhysicalAddressBits > 48)) {
> > - PhysicalAddressBits = 48;
> > - }
> > -
> > - NumberOfPml5EntriesNeeded = 1;
> > - if (PhysicalAddressBits > 48) {
> > - NumberOfPml5EntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits -
> > 48);
> > - PhysicalAddressBits = 48;
> > - }
> > -
> > - NumberOfPml4EntriesNeeded = 1;
> > - if (PhysicalAddressBits > 39) {
> > - NumberOfPml4EntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits -
> > 39);
> > - PhysicalAddressBits = 39;
> > - }
> > -
> > - NumberOfPdpEntriesNeeded = 1;
> > - ASSERT (PhysicalAddressBits > 30);
> > - NumberOfPdpEntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits
> > - 30);
> > -
> > - //
> > - // By architecture only one PageMapLevel4 exists - so lets allocate
> > storage for it.
> > - //
> > - PageMap = (VOID *)PageTable;
> > -
> > - PageMapLevel4Entry = PageMap;
> > - PageMapLevel5Entry = NULL;
> > - if (m5LevelPagingNeeded) {
> > - //
> > - // By architecture only one PageMapLevel5 exists - so lets allocate storage
> for
> > it.
> > - //
> > - PageMapLevel5Entry = PageMap;
> > - }
> > -
> > - PageAddress = 0;
> > -
> > - for ( IndexOfPml5Entries = 0
> > - ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded
> > - ; IndexOfPml5Entries++, PageMapLevel5Entry++)
> > - {
> > - //
> > - // Each PML5 entry points to a page of PML4 entires.
> > - // So lets allocate space for them and fill them in in the IndexOfPml4Entries
> > loop.
> > - // When 5-Level Paging is disabled, below allocation happens only once.
> > - //
> > - if (m5LevelPagingNeeded) {
> > - PageMapLevel4Entry = (UINT64 *)((*PageMapLevel5Entry) &
> > ~mAddressEncMask & gPhyMask);
> > - if (PageMapLevel4Entry == NULL) {
> > - PageMapLevel4Entry = AllocatePageTableMemory (1);
> > - ASSERT (PageMapLevel4Entry != NULL);
> > - ZeroMem (PageMapLevel4Entry, EFI_PAGES_TO_SIZE (1));
> > -
> > - *PageMapLevel5Entry = (UINT64)(UINTN)PageMapLevel4Entry |
> > mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> > - }
> > - }
> > -
> > - for (IndexOfPml4Entries = 0; IndexOfPml4Entries <
> > (NumberOfPml5EntriesNeeded == 1 ? NumberOfPml4EntriesNeeded : 512);
> > IndexOfPml4Entries++, PageMapLevel4Entry++) {
> > - //
> > - // Each PML4 entry points to a page of Page Directory Pointer entries.
> > - //
> > - PageDirectoryPointerEntry = (UINT64 *)((*PageMapLevel4Entry) &
> > ~mAddressEncMask & gPhyMask);
> > - if (PageDirectoryPointerEntry == NULL) {
> > - PageDirectoryPointerEntry = AllocatePageTableMemory (1);
> > - ASSERT (PageDirectoryPointerEntry != NULL);
> > - ZeroMem (PageDirectoryPointerEntry, EFI_PAGES_TO_SIZE (1));
> > -
> > - *PageMapLevel4Entry = (UINT64)(UINTN)PageDirectoryPointerEntry |
> > mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> > - }
> > -
> > - if (m1GPageTableSupport) {
> > - PageDirectory1GEntry = PageDirectoryPointerEntry;
> > - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512;
> > IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress +=
> > SIZE_1GB) {
> > - if ((IndexOfPml4Entries == 0) && (IndexOfPageDirectoryEntries < 4)) {
> > - //
> > - // Skip the < 4G entries
> > - //
> > - continue;
> > - }
> > -
> > - //
> > - // Fill in the Page Directory entries
> > - //
> > - *PageDirectory1GEntry = PageAddress | mAddressEncMask |
> IA32_PG_PS
> > | PAGE_ATTRIBUTE_BITS;
> > - }
> > - } else {
> > - PageAddress = BASE_4GB;
> > - for (IndexOfPdpEntries = 0; IndexOfPdpEntries <
> > (NumberOfPml4EntriesNeeded == 1 ? NumberOfPdpEntriesNeeded : 512);
> > IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
> > - if ((IndexOfPml4Entries == 0) && (IndexOfPdpEntries < 4)) {
> > - //
> > - // Skip the < 4G entries
> > - //
> > - continue;
> > - }
> > -
> > - //
> > - // Each Directory Pointer entries points to a page of Page Directory
> entires.
> > - // So allocate space for them and fill them in in the
> > IndexOfPageDirectoryEntries loop.
> > - //
> > - PageDirectoryEntry = (UINT64 *)((*PageDirectoryPointerEntry) &
> > ~mAddressEncMask & gPhyMask);
> > - if (PageDirectoryEntry == NULL) {
> > - PageDirectoryEntry = AllocatePageTableMemory (1);
> > - ASSERT (PageDirectoryEntry != NULL);
> > - ZeroMem (PageDirectoryEntry, EFI_PAGES_TO_SIZE (1));
> > -
> > - //
> > - // Fill in a Page Directory Pointer Entries
> > - //
> > - *PageDirectoryPointerEntry = (UINT64)(UINTN)PageDirectoryEntry |
> > mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> > - }
> > -
> > - for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries <
> 512;
> > IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress +=
> > SIZE_2MB) {
> > - //
> > - // Fill in the Page Directory entries
> > - //
> > - *PageDirectoryEntry = PageAddress | mAddressEncMask | IA32_PG_PS
> |
> > PAGE_ATTRIBUTE_BITS;
> > - }
> > - }
> > - }
> > - }
> > - }
> > -}
> > -
> > /**
> > Create PageTable for SMM use.
> >
> > @@ -332,15 +178,16 @@ SmmInitPageTable (
> > VOID
> > )
> > {
> > - EFI_PHYSICAL_ADDRESS Pages;
> > - UINT64 *PTEntry;
> > + UINTN PageTable;
> > LIST_ENTRY *FreePage;
> > UINTN Index;
> > UINTN PageFaultHandlerHookAddress;
> > IA32_IDT_GATE_DESCRIPTOR *IdtEntry;
> > EFI_STATUS Status;
> > + UINT64 *PdptEntry;
> > UINT64 *Pml4Entry;
> > UINT64 *Pml5Entry;
> > + UINT8 PhysicalAddressBits;
> >
> > //
> > // Initialize spin lock
> > @@ -357,59 +204,44 @@ SmmInitPageTable (
> > } else {
> > mPagingMode = m1GPageTableSupport ? Paging4Level1GB : Paging4Level;
> > }
> > +
> > DEBUG ((DEBUG_INFO, "5LevelPaging Needed - %d\n",
> > m5LevelPagingNeeded));
> > DEBUG ((DEBUG_INFO, "1GPageTable Support - %d\n",
> > m1GPageTableSupport));
> > DEBUG ((DEBUG_INFO, "PcdCpuSmmRestrictedMemoryAccess - %d\n",
> > mCpuSmmRestrictedMemoryAccess));
> > DEBUG ((DEBUG_INFO, "PhysicalAddressBits - %d\n",
> > mPhysicalAddressBits));
> > - //
> > - // Generate PAE page table for the first 4GB memory space
> > - //
> > - Pages = Gen4GPageTable (FALSE);
> >
> > //
> > - // Set IA32_PG_PMNT bit to mask this entry
> > + // Generate initial SMM page table.
> > + // Only map [0, 4G] when PcdCpuSmmRestrictedMemoryAccess is FALSE.
> > //
> > - PTEntry = (UINT64 *)(UINTN)Pages;
> > - for (Index = 0; Index < 4; Index++) {
> > - PTEntry[Index] |= IA32_PG_PMNT;
> > - }
> > -
> > - //
> > - // Fill Page-Table-Level4 (PML4) entry
> > - //
> > - Pml4Entry = (UINT64 *)AllocatePageTableMemory (1);
> > - ASSERT (Pml4Entry != NULL);
> > - *Pml4Entry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> > - ZeroMem (Pml4Entry + 1, EFI_PAGE_SIZE - sizeof (*Pml4Entry));
> > -
> > - //
> > - // Set sub-entries number
> > - //
> > - SetSubEntriesNum (Pml4Entry, 3);
> > - PTEntry = Pml4Entry;
> > + PhysicalAddressBits = mCpuSmmRestrictedMemoryAccess ?
> > mPhysicalAddressBits : 32;
> > + PageTable = GenSmmPageTable (mPagingMode, PhysicalAddressBits);
> >
> > if (m5LevelPagingNeeded) {
> > + Pml5Entry = (UINT64 *)PageTable;
> > //
> > - // Fill PML5 entry
> > - //
> > - Pml5Entry = (UINT64 *)AllocatePageTableMemory (1);
> > - ASSERT (Pml5Entry != NULL);
> > - *Pml5Entry = (UINTN)Pml4Entry | mAddressEncMask |
> > PAGE_ATTRIBUTE_BITS;
> > - ZeroMem (Pml5Entry + 1, EFI_PAGE_SIZE - sizeof (*Pml5Entry));
> > - //
> > - // Set sub-entries number
> > + // Set Pml5Entry sub-entries number for smm PF handler usage.
> > //
> > SetSubEntriesNum (Pml5Entry, 1);
> > - PTEntry = Pml5Entry;
> > +    Pml4Entry = (UINT64 *)((*Pml5Entry) & ~mAddressEncMask & gPhyMask);
> > +  } else {
> > +    Pml4Entry = (UINT64 *)PageTable;
> > +  }
> > +
> > + //
> > + // Set IA32_PG_PMNT bit to mask first 4 PdptEntry.
> > + //
> > + PdptEntry = (UINT64 *)((*Pml4Entry) & ~mAddressEncMask & gPhyMask);
> > + for (Index = 0; Index < 4; Index++) {
> > + PdptEntry[Index] |= IA32_PG_PMNT;
> > }
> >
> > - if (mCpuSmmRestrictedMemoryAccess) {
> > + if (!mCpuSmmRestrictedMemoryAccess) {
> > //
> > - // When access to non-SMRAM memory is restricted, create page table
> > - // that covers all memory space.
> > + // Set Pml4Entry sub-entries number for smm PF handler usage.
> > //
> > - SetStaticPageTable ((UINTN)PTEntry, mPhysicalAddressBits);
> > - } else {
> > + SetSubEntriesNum (Pml4Entry, 3);
> > +
> > //
> > // Add pages to page pool
> > //
> > @@ -466,7 +298,7 @@ SmmInitPageTable (
> > //
> > // Return the address of PML4/PML5 (to set CR3)
> > //
> > - return (UINT32)(UINTN)PTEntry;
> > + return (UINT32)PageTable;
> > }
> >
> > /**
> > --
> > 2.31.1.windows.1
> >
> >
> >
> >
> >
* Re: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable() to create smm page table
2023-06-02 5:08 ` Ni, Ray
@ 2023-06-02 7:33 ` duntan
0 siblings, 0 replies; 44+ messages in thread
From: duntan @ 2023-06-02 7:33 UTC (permalink / raw)
To: Ni, Ray, devel@edk2.groups.io; +Cc: Dong, Eric, Kumar, Rahul R, Gerd Hoffmann
In the original code logic, the SmmS3 page table marks the GuardPage in the smm normal stack as not-present, instead of the GuardPage in the Smm S3 stack.
A Bugzilla has been submitted to track this issue: https://bugzilla.tianocore.org/show_bug.cgi?id=4476 . The issue will be fixed in future patches.
So for now the code keeps the existing behavior: the GuardPage in the normal stack and the GuardPage in the shadow stack are protected in different places.
Thanks,
Dun
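As a side note, the per-processor guard-page address that GenSmmPageTable() marks not-present follows directly from the SMM stack layout discussed in this thread: each processor owns an (mSmmStackSize + mSmmShadowStackSize) slot, and the guard page sits one page above the known good stack. A minimal sketch of that arithmetic in plain C, with illustrative sizes standing in for the real PCD-derived values:

```c
#include <stdint.h>
#include <assert.h>

#define PAGE_SIZE  0x1000u

/* The guard page of processor Index sits one page above the base of that
   processor's slot; each slot is (SmmStackSize + SmmShadowStackSize) bytes.
   This mirrors the GuardPage computation in GenSmmPageTable(). */
uintptr_t
SmmStackGuardPage (uintptr_t StackArrayBase, uintptr_t SmmStackSize,
                   uintptr_t SmmShadowStackSize, uintptr_t Index)
{
  return StackArrayBase + PAGE_SIZE + Index * (SmmStackSize + SmmShadowStackSize);
}
```

With example sizes of 0x5000 for the normal stack range and 0x3000 for the shadow stack range, consecutive processors' guard pages land 0x8000 bytes apart.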
-----Original Message-----
From: Ni, Ray <ray.ni@intel.com>
Sent: Friday, June 2, 2023 1:09 PM
To: Tan, Dun <dun.tan@intel.com>; devel@edk2.groups.io
Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
Subject: RE: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable() to create smm page table
I see.
The GuardPage in normal stack is marked as not-present inside GenSmmPageTable.
The GuardPage in shadow stack is marked as not-present after calling InitializeMpServiceData().
Do you think it would be clearer to group them together?
Thanks,
Ray
> -----Original Message-----
> From: Tan, Dun <dun.tan@intel.com>
> Sent: Friday, June 2, 2023 11:47 AM
> To: Ni, Ray <ray.ni@intel.com>; devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R
> <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: RE: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add
> GenSmmPageTable() to create smm page table
>
> Edited the reply to make it clearer.
>
> -----Original Message-----
> From: Tan, Dun
> Sent: Friday, June 2, 2023 11:36 AM
> To: Ni, Ray <ray.ni@intel.com>; devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R
> <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: RE: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add
> GenSmmPageTable() to create smm page table
>
> GenSmmPageTable() doesn't mark the "Guard page" in
> "mSmmShadowStackSize range" is to align with old behavior.
> GenSmmPageTable() is also used to create SmmS3Cr3 and the "Guard page"
> in "mSmmShadowStackSize range" is not marked as non-present in SmmS3Cr3.
> In the code logic, the "Guard page" in "mSmmShadowStackSize range" is
> marked as not-present after InitializeMpServiceData() creates the initial smm page table.
> This process is only done for smm runtime page table.
>
> Thanks,
> Dun
> -----Original Message-----
> From: Ni, Ray <ray.ni@intel.com>
> Sent: Friday, June 2, 2023 11:23 AM
> To: devel@edk2.groups.io; Tan, Dun <dun.tan@intel.com>
> Cc: Dong, Eric <eric.dong@intel.com>; Kumar, Rahul R
> <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: RE: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add
> GenSmmPageTable() to create smm page table
>
>
> //
> // SMM Stack Guard Enabled
> // Append Shadow Stack after normal stack
> // 2 more pages is allocated for each processor, one is guard page and the
> other is known good shadow stack.
> //
> // |= Stacks
> //
> +--------------------------------------------------+------------------
> --------------------
> -------------------------+
> // | Known Good Stack | Guard Page | SMM Stack | Known Good Shadow
> Stack | Guard Page | SMM Shadow Stack |
> //
> +--------------------------------------------------+------------------
> --------------------
> -------------------------+
> // | 4K | 4K |PcdCpuSmmStackSize| 4K | 4K
> |PcdCpuSmmShadowStackSize|
> // |<---------------- mSmmStackSize
> ----------------->|<---------------------
> mSmmShadowStackSize ------------------->|
> // | |
> // |<-------------------------------------------- Processor N
> ----------------------------
> --------------------------->|
> //
>
> GenSmmPageTable() only sets the "Guard page" in "mSmmStackSize range"
> as not-present.
> But the "Guard page" in "mSmmShadowStackSize range" is not marked as
> not- present.
> Why?
>
> Thanks,
> Ray
>
> > -----Original Message-----
> > From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of
> > duntan
> > Sent: Tuesday, May 16, 2023 5:59 PM
> > To: devel@edk2.groups.io
> > Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>;
> > Kumar, Rahul R <rahul.r.kumar@intel.com>; Gerd Hoffmann
> > <kraxel@redhat.com>
> > Subject: [edk2-devel] [Patch V4 10/15] UefiCpuPkg: Add
> > GenSmmPageTable() to create smm page table
> >
> > This commit is code refinement to current smm pagetable generation
> > code. Add a new GenSmmPageTable() API to create smm page table based
> > on the PageTableMap() API in CpuPageTableLib. Caller only needs to
> > specify the paging mode and the PhysicalAddressBits to map.
> > This function can be used to create both IA32 pae paging and X64
> > 5level, 4level paging.
> >
> > Signed-off-by: Dun Tan <dun.tan@intel.com>
> > Cc: Eric Dong <eric.dong@intel.com>
> > Cc: Ray Ni <ray.ni@intel.com>
> > Cc: Rahul Kumar <rahul1.kumar@intel.com>
> > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > ---
> > UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c | 2 +-
> > UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 15
> > +++++++++++++++
> > UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 65
> >
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 220
> > ++++++++++++++++++++++++++------------------------------------------
> > ++++++++++++++++++++++++++--
> > ++++++++++++++++++++++++++---------------
> > --------------------------------------------------------------------
> > --
> > ----------------------------
> > -------------------------------------
> > 4 files changed, 107 insertions(+), 195 deletions(-)
> >
> > diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> > b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> > index 9c8107080a..b11264ce4a 100644
> > --- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> > +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> > @@ -63,7 +63,7 @@ SmmInitPageTable (
> > InitializeIDTSmmStackGuard ();
> > }
> >
> > - return Gen4GPageTable (TRUE);
> > + return GenSmmPageTable (PagingPae, mPhysicalAddressBits);
> > }
> >
> > /**
> > diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> > b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> > index a7da9673a5..5399659bc0 100644
> > --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> > +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> > @@ -553,6 +553,21 @@ Gen4GPageTable (
> > IN BOOLEAN Is32BitPageTable
> > );
> >
> > +/**
> > + Create page table based on input PagingMode and
> > +PhysicalAddressBits in
> smm.
> > +
> > + @param[in] PagingMode The paging mode.
> > + @param[in] PhysicalAddressBits The bits of physical address to map.
> > +
> > + @retval PageTable Address
> > +
> > +**/
> > +UINTN
> > +GenSmmPageTable (
> > + IN PAGING_MODE PagingMode,
> > + IN UINT8 PhysicalAddressBits
> > + );
> > +
> > /**
> > Initialize global data for MP synchronization.
> >
> > diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> > b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> > index ef0ba9a355..138ff43c9d 100644
> > --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> > +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> > @@ -1642,6 +1642,71 @@ EdkiiSmmClearMemoryAttributes (
> > return SmmClearMemoryAttributes (BaseAddress, Length,
> > Attributes); }
> >
> > +/**
> > + Create page table based on input PagingMode and
> > +PhysicalAddressBits in
> smm.
> > +
> > + @param[in] PagingMode The paging mode.
> > + @param[in] PhysicalAddressBits The bits of physical address to map.
> > +
> > + @retval PageTable Address
> > +
> > +**/
> > +UINTN
> > +GenSmmPageTable (
> > + IN PAGING_MODE PagingMode,
> > + IN UINT8 PhysicalAddressBits
> > + )
> > +{
> > + UINTN PageTableBufferSize;
> > + UINTN PageTable;
> > + VOID *PageTableBuffer;
> > + IA32_MAP_ATTRIBUTE MapAttribute;
> > + IA32_MAP_ATTRIBUTE MapMask;
> > + RETURN_STATUS Status;
> > + UINTN GuardPage;
> > + UINTN Index;
> > + UINT64 Length;
> > +
> > + Length = LShiftU64 (1, PhysicalAddressBits);
> > + PageTable = 0;
> > + PageTableBufferSize = 0;
> > + MapMask.Uint64 = MAX_UINT64;
> > + MapAttribute.Uint64 = mAddressEncMask;
> > + MapAttribute.Bits.Present = 1;
> > + MapAttribute.Bits.ReadWrite = 1;
> > + MapAttribute.Bits.UserSupervisor = 1;
> > + MapAttribute.Bits.Accessed = 1;
> > + MapAttribute.Bits.Dirty = 1;
> > +
> > +  Status = PageTableMap (&PageTable, PagingMode, NULL, &PageTableBufferSize, 0, Length, &MapAttribute, &MapMask, NULL);
> > +  ASSERT (Status == RETURN_BUFFER_TOO_SMALL);
> > +  DEBUG ((DEBUG_INFO, "GenSMMPageTable: 0x%x bytes needed for initial SMM page table\n", PageTableBufferSize));
> > +  PageTableBuffer = AllocatePageTableMemory (EFI_SIZE_TO_PAGES (PageTableBufferSize));
> > +  ASSERT (PageTableBuffer != NULL);
> > +  Status = PageTableMap (&PageTable, PagingMode, PageTableBuffer, &PageTableBufferSize, 0, Length, &MapAttribute, &MapMask, NULL);
> > +  ASSERT (Status == RETURN_SUCCESS);
> > +  ASSERT (PageTableBufferSize == 0);
> > +
> > +  if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
> > +    //
> > +    // Mark the 4KB guard page between known good stack and smm stack as non-present
> > +    //
> > +    for (Index = 0; Index < gSmmCpuPrivate->SmmCoreEntryContext.NumberOfCpus; Index++) {
> > +      GuardPage = mSmmStackArrayBase + EFI_PAGE_SIZE + Index * (mSmmStackSize + mSmmShadowStackSize);
> > +      Status    = ConvertMemoryPageAttributes (PageTable, PagingMode, GuardPage, SIZE_4KB, EFI_MEMORY_RP, TRUE, NULL);
> > +    }
> > +  }
> > +
> > +  if ((PcdGet8 (PcdNullPointerDetectionPropertyMask) & BIT1) != 0) {
> > +    //
> > +    // Mark [0, 4k] as non-present
> > +    //
> > +    Status = ConvertMemoryPageAttributes (PageTable, PagingMode, 0, SIZE_4KB, EFI_MEMORY_RP, TRUE, NULL);
> > +  }
> > +
> > + return (UINTN)PageTable;
> > +}
> > +
> > /**
> >   This function retrieves the attributes of the memory region specified by
> >   BaseAddress and Length. If different attributes are got from different part
> >
> > diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> > b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> > index 25ced50955..060e6dc147 100644
> > --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> > +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> > @@ -167,160 +167,6 @@ CalculateMaximumSupportAddress (
> > return PhysicalAddressBits;
> > }
> >
> > -/**
> > - Set static page table.
> > -
> > - @param[in] PageTable Address of page table.
> > - @param[in] PhysicalAddressBits The maximum physical address bits
> > supported.
> > -**/
> > -VOID
> > -SetStaticPageTable (
> > - IN UINTN PageTable,
> > - IN UINT8 PhysicalAddressBits
> > - )
> > -{
> > - UINT64 PageAddress;
> > - UINTN NumberOfPml5EntriesNeeded;
> > - UINTN NumberOfPml4EntriesNeeded;
> > - UINTN NumberOfPdpEntriesNeeded;
> > - UINTN IndexOfPml5Entries;
> > - UINTN IndexOfPml4Entries;
> > - UINTN IndexOfPdpEntries;
> > - UINTN IndexOfPageDirectoryEntries;
> > - UINT64 *PageMapLevel5Entry;
> > - UINT64 *PageMapLevel4Entry;
> > - UINT64 *PageMap;
> > - UINT64 *PageDirectoryPointerEntry;
> > - UINT64 *PageDirectory1GEntry;
> > - UINT64 *PageDirectoryEntry;
> > -
> > - //
> > - // IA-32e paging translates 48-bit linear addresses to 52-bit
> > physical addresses
> > - // when 5-Level Paging is disabled.
> > - //
> > - ASSERT (PhysicalAddressBits <= 52);
> > - if (!m5LevelPagingNeeded && (PhysicalAddressBits > 48)) {
> > - PhysicalAddressBits = 48;
> > - }
> > -
> > - NumberOfPml5EntriesNeeded = 1;
> > - if (PhysicalAddressBits > 48) {
> > - NumberOfPml5EntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits -
> > 48);
> > - PhysicalAddressBits = 48;
> > - }
> > -
> > - NumberOfPml4EntriesNeeded = 1;
> > - if (PhysicalAddressBits > 39) {
> > - NumberOfPml4EntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits -
> > 39);
> > - PhysicalAddressBits = 39;
> > - }
> > -
> > - NumberOfPdpEntriesNeeded = 1;
> > - ASSERT (PhysicalAddressBits > 30);
> > -  NumberOfPdpEntriesNeeded = (UINTN)LShiftU64 (1, PhysicalAddressBits - 30);
> > -
> > - //
> > - // By architecture only one PageMapLevel4 exists - so lets
> > allocate storage for it.
> > - //
> > - PageMap = (VOID *)PageTable;
> > -
> > - PageMapLevel4Entry = PageMap;
> > - PageMapLevel5Entry = NULL;
> > - if (m5LevelPagingNeeded) {
> > - //
> > - // By architecture only one PageMapLevel5 exists - so lets allocate storage
> for
> > it.
> > - //
> > - PageMapLevel5Entry = PageMap;
> > - }
> > -
> > - PageAddress = 0;
> > -
> > - for ( IndexOfPml5Entries = 0
> > - ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded
> > - ; IndexOfPml5Entries++, PageMapLevel5Entry++)
> > - {
> > - //
> > - // Each PML5 entry points to a page of PML4 entires.
> > - // So lets allocate space for them and fill them in in the IndexOfPml4Entries
> > loop.
> > - // When 5-Level Paging is disabled, below allocation happens only once.
> > - //
> > - if (m5LevelPagingNeeded) {
> > - PageMapLevel4Entry = (UINT64 *)((*PageMapLevel5Entry) &
> > ~mAddressEncMask & gPhyMask);
> > - if (PageMapLevel4Entry == NULL) {
> > - PageMapLevel4Entry = AllocatePageTableMemory (1);
> > - ASSERT (PageMapLevel4Entry != NULL);
> > - ZeroMem (PageMapLevel4Entry, EFI_PAGES_TO_SIZE (1));
> > -
> > - *PageMapLevel5Entry = (UINT64)(UINTN)PageMapLevel4Entry |
> > mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> > - }
> > - }
> > -
> > - for (IndexOfPml4Entries = 0; IndexOfPml4Entries <
> > (NumberOfPml5EntriesNeeded == 1 ? NumberOfPml4EntriesNeeded : 512);
> > IndexOfPml4Entries++, PageMapLevel4Entry++) {
> > - //
> > - // Each PML4 entry points to a page of Page Directory Pointer entries.
> > - //
> > - PageDirectoryPointerEntry = (UINT64 *)((*PageMapLevel4Entry) &
> > ~mAddressEncMask & gPhyMask);
> > - if (PageDirectoryPointerEntry == NULL) {
> > - PageDirectoryPointerEntry = AllocatePageTableMemory (1);
> > - ASSERT (PageDirectoryPointerEntry != NULL);
> > - ZeroMem (PageDirectoryPointerEntry, EFI_PAGES_TO_SIZE (1));
> > -
> > - *PageMapLevel4Entry = (UINT64)(UINTN)PageDirectoryPointerEntry |
> > mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> > - }
> > -
> > - if (m1GPageTableSupport) {
> > - PageDirectory1GEntry = PageDirectoryPointerEntry;
> > -        for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += SIZE_1GB) {
> > - if ((IndexOfPml4Entries == 0) && (IndexOfPageDirectoryEntries < 4)) {
> > - //
> > - // Skip the < 4G entries
> > - //
> > - continue;
> > - }
> > -
> > - //
> > - // Fill in the Page Directory entries
> > - //
> > -          *PageDirectory1GEntry = PageAddress | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
> > - }
> > - } else {
> > - PageAddress = BASE_4GB;
> > - for (IndexOfPdpEntries = 0; IndexOfPdpEntries <
> > (NumberOfPml4EntriesNeeded == 1 ? NumberOfPdpEntriesNeeded : 512);
> > IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
> > - if ((IndexOfPml4Entries == 0) && (IndexOfPdpEntries < 4)) {
> > - //
> > - // Skip the < 4G entries
> > - //
> > - continue;
> > - }
> > -
> > - //
> > - // Each Directory Pointer entries points to a page of Page Directory
> entires.
> > - // So allocate space for them and fill them in in the
> > IndexOfPageDirectoryEntries loop.
> > - //
> > - PageDirectoryEntry = (UINT64 *)((*PageDirectoryPointerEntry) &
> > ~mAddressEncMask & gPhyMask);
> > - if (PageDirectoryEntry == NULL) {
> > - PageDirectoryEntry = AllocatePageTableMemory (1);
> > - ASSERT (PageDirectoryEntry != NULL);
> > - ZeroMem (PageDirectoryEntry, EFI_PAGES_TO_SIZE (1));
> > -
> > - //
> > - // Fill in a Page Directory Pointer Entries
> > - //
> > - *PageDirectoryPointerEntry = (UINT64)(UINTN)PageDirectoryEntry |
> > mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> > - }
> > -
> > -          for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) {
> > - //
> > - // Fill in the Page Directory entries
> > - //
> > -            *PageDirectoryEntry = PageAddress | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
> > - }
> > - }
> > - }
> > - }
> > - }
> > -}
> > -
> > /**
> > Create PageTable for SMM use.
> >
> > @@ -332,15 +178,16 @@ SmmInitPageTable (
> > VOID
> > )
> > {
> > - EFI_PHYSICAL_ADDRESS Pages;
> > - UINT64 *PTEntry;
> > + UINTN PageTable;
> > LIST_ENTRY *FreePage;
> > UINTN Index;
> > UINTN PageFaultHandlerHookAddress;
> > IA32_IDT_GATE_DESCRIPTOR *IdtEntry;
> > EFI_STATUS Status;
> > + UINT64 *PdptEntry;
> > UINT64 *Pml4Entry;
> > UINT64 *Pml5Entry;
> > + UINT8 PhysicalAddressBits;
> >
> > //
> > // Initialize spin lock
> > @@ -357,59 +204,44 @@ SmmInitPageTable (
> > } else {
> > mPagingMode = m1GPageTableSupport ? Paging4Level1GB : Paging4Level;
> > }
> > +
> > DEBUG ((DEBUG_INFO, "5LevelPaging Needed - %d\n",
> > m5LevelPagingNeeded));
> > DEBUG ((DEBUG_INFO, "1GPageTable Support - %d\n",
> > m1GPageTableSupport));
> > DEBUG ((DEBUG_INFO, "PcdCpuSmmRestrictedMemoryAccess - %d\n",
> > mCpuSmmRestrictedMemoryAccess));
> > DEBUG ((DEBUG_INFO, "PhysicalAddressBits - %d\n",
> > mPhysicalAddressBits));
> > - //
> > - // Generate PAE page table for the first 4GB memory space
> > - //
> > - Pages = Gen4GPageTable (FALSE);
> >
> > //
> > - // Set IA32_PG_PMNT bit to mask this entry
> > + // Generate initial SMM page table.
> > + // Only map [0, 4G] when PcdCpuSmmRestrictedMemoryAccess is FALSE.
> > //
> > - PTEntry = (UINT64 *)(UINTN)Pages;
> > - for (Index = 0; Index < 4; Index++) {
> > - PTEntry[Index] |= IA32_PG_PMNT;
> > - }
> > -
> > - //
> > - // Fill Page-Table-Level4 (PML4) entry
> > - //
> > - Pml4Entry = (UINT64 *)AllocatePageTableMemory (1);
> > - ASSERT (Pml4Entry != NULL);
> > - *Pml4Entry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> > - ZeroMem (Pml4Entry + 1, EFI_PAGE_SIZE - sizeof (*Pml4Entry));
> > -
> > - //
> > - // Set sub-entries number
> > - //
> > - SetSubEntriesNum (Pml4Entry, 3);
> > - PTEntry = Pml4Entry;
> > +  PhysicalAddressBits = mCpuSmmRestrictedMemoryAccess ? mPhysicalAddressBits : 32;
> > +  PageTable           = GenSmmPageTable (mPagingMode, PhysicalAddressBits);
> >
> > if (m5LevelPagingNeeded) {
> > + Pml5Entry = (UINT64 *)PageTable;
> > //
> > - // Fill PML5 entry
> > - //
> > - Pml5Entry = (UINT64 *)AllocatePageTableMemory (1);
> > - ASSERT (Pml5Entry != NULL);
> > - *Pml5Entry = (UINTN)Pml4Entry | mAddressEncMask |
> > PAGE_ATTRIBUTE_BITS;
> > - ZeroMem (Pml5Entry + 1, EFI_PAGE_SIZE - sizeof (*Pml5Entry));
> > - //
> > - // Set sub-entries number
> > + // Set Pml5Entry sub-entries number for smm PF handler usage.
> > //
> > SetSubEntriesNum (Pml5Entry, 1);
> > - PTEntry = Pml5Entry;
> > +    Pml4Entry = (UINT64 *)((*Pml5Entry) & ~mAddressEncMask & gPhyMask);
> > +  } else {
> > +    Pml4Entry = (UINT64 *)PageTable;
> > +  }
> > +
> > + //
> > + // Set IA32_PG_PMNT bit to mask first 4 PdptEntry.
> > + //
> > +  PdptEntry = (UINT64 *)((*Pml4Entry) & ~mAddressEncMask & gPhyMask);
> > +  for (Index = 0; Index < 4; Index++) {
> > +    PdptEntry[Index] |= IA32_PG_PMNT;
> >    }
> >
> > - if (mCpuSmmRestrictedMemoryAccess) {
> > + if (!mCpuSmmRestrictedMemoryAccess) {
> > //
> > - // When access to non-SMRAM memory is restricted, create page table
> > - // that covers all memory space.
> > + // Set Pml4Entry sub-entries number for smm PF handler usage.
> > //
> > - SetStaticPageTable ((UINTN)PTEntry, mPhysicalAddressBits);
> > - } else {
> > + SetSubEntriesNum (Pml4Entry, 3);
> > +
> > //
> > // Add pages to page pool
> > //
> > @@ -466,7 +298,7 @@ SmmInitPageTable (
> > //
> > // Return the address of PML4/PML5 (to set CR3)
> > //
> > - return (UINT32)(UINTN)PTEntry;
> > + return (UINT32)PageTable;
> > }
> >
> > /**
> > --
> > 2.31.1.windows.1
> >
> >
> >
> >
> >
Thread overview: 44+ messages
2023-05-16 9:59 [Patch V4 00/15] Use CpuPageTableLib to create and update smm page table duntan
2023-05-16 9:59 ` [Patch V4 01/15] OvmfPkg: Add CpuPageTableLib required by PiSmmCpuDxe duntan
2023-05-16 9:59 ` [Patch V4 02/15] UefiPayloadPkg: " duntan
2023-05-16 10:01 ` Guo, Gua
2023-05-16 9:59 ` [Patch V4 03/15] OvmfPkg:Remove code that apply AddressEncMask to non-leaf entry duntan
2023-05-16 9:59 ` [Patch V4 04/15] MdeModulePkg: Remove RO and NX protection when unset guard page duntan
2023-05-16 19:04 ` [edk2-devel] " Kun Qin
2023-05-17 10:16 ` duntan
2023-05-16 9:59 ` [Patch V4 05/15] UefiCpuPkg: Use CpuPageTableLib to convert SMM paging attribute duntan
2023-06-01 1:09 ` Ni, Ray
2023-05-16 9:59 ` [Patch V4 06/15] UefiCpuPkg/PiSmmCpuDxeSmm: Avoid setting non-present range to RO/NX duntan
2023-05-16 9:59 ` [Patch V4 07/15] UefiCpuPkg/PiSmmCpuDxeSmm: Add 2 function to disable/enable CR0.WP duntan
2023-05-20 2:00 ` [edk2-devel] " Kun Qin
2023-05-23 9:14 ` duntan
2023-05-24 18:39 ` Kun Qin
2023-05-25 0:46 ` Ni, Ray
2023-05-26 2:48 ` Kun Qin
2023-06-02 3:09 ` Ni, Ray
2023-05-16 9:59 ` [Patch V4 08/15] UefiCpuPkg/PiSmmCpuDxeSmm: Clear CR0.WP before modify page table duntan
2023-06-02 3:12 ` [edk2-devel] " Ni, Ray
2023-05-16 9:59 ` [Patch V4 09/15] UefiCpuPkg: Extern mSmmShadowStackSize in PiSmmCpuDxeSmm.h duntan
2023-06-02 3:16 ` [edk2-devel] " Ni, Ray
2023-06-02 3:36 ` duntan
2023-05-16 9:59 ` [Patch V4 10/15] UefiCpuPkg: Add GenSmmPageTable() to create smm page table duntan
2023-06-02 3:23 ` [edk2-devel] " Ni, Ray
2023-06-02 3:36 ` duntan
2023-06-02 3:46 ` duntan
2023-06-02 5:08 ` Ni, Ray
2023-06-02 7:33 ` duntan
2023-05-16 9:59 ` [Patch V4 11/15] UefiCpuPkg: Use GenSmmPageTable() to create Smm S3 " duntan
2023-06-02 3:31 ` [edk2-devel] " Ni, Ray
2023-06-02 3:37 ` duntan
2023-05-16 9:59 ` [Patch V4 12/15] UefiCpuPkg: Sort mSmmCpuSmramRanges in FindSmramInfo duntan
2023-06-02 3:33 ` [edk2-devel] " Ni, Ray
2023-06-02 3:43 ` duntan
2023-05-16 9:59 ` [Patch V4 13/15] UefiCpuPkg: Sort mProtectionMemRange when ReadyToLock duntan
2023-06-02 3:34 ` Ni, Ray
2023-06-02 3:35 ` Ni, Ray
2023-06-02 3:55 ` duntan
2023-05-16 9:59 ` [Patch V4 14/15] UefiCpuPkg: Refinement to smm runtime InitPaging() code duntan
2023-06-02 3:54 ` [edk2-devel] " Ni, Ray
2023-06-02 3:59 ` duntan
2023-05-16 9:59 ` [Patch V4 15/15] UefiCpuPkg/PiSmmCpuDxeSmm: Remove unnecessary function duntan
2023-06-02 3:55 ` Ni, Ray