* [PATCH v4 0/6] Add PCD PcdPteMemoryEncryptionAddressOrMask
@ 2017-02-26 17:43 Leo Duran
2017-02-26 17:43 ` [PATCH v4 1/6] MdeModulePkg: " Leo Duran
` (6 more replies)
0 siblings, 7 replies; 15+ messages in thread
From: Leo Duran @ 2017-02-26 17:43 UTC (permalink / raw)
To: edk2-devel; +Cc: Leo Duran
This new PCD holds the address mask for page table entries when memory
encryption is enabled on AMD processors supporting the Secure Encrypted
Virtualization (SEV) feature.
This mask is applied when creating or modifying page-table entries.
For example, the OvmfPkg would set the PCD when launching SEV-enabled guests.
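The whole series applies one idiom: clamp the PCD value to the page-table address field, then OR it into every entry that carries a physical address. A minimal standalone sketch of that idiom (the bit-47 mask used in testing is purely illustrative; real firmware derives the C-bit position at runtime on SEV-capable processors):

```c
#include <stdint.h>

#define PAGING_1G_ADDRESS_MASK_64  0x000FFFFFC0000000ULL
#define IA32_PG_P                  (1ULL << 0)
#define IA32_PG_RW                 (1ULL << 1)

/* Build a present, writable entry pointing at PhysicalAddress, folding in
   the encryption mask the way the patches do: clamp the mask to the
   address field first so stray bits cannot reach the flag bits. */
static uint64_t
MakePte (uint64_t PhysicalAddress, uint64_t AddressEncMask)
{
  AddressEncMask &= PAGING_1G_ADDRESS_MASK_64;
  return PhysicalAddress | AddressEncMask | IA32_PG_P | IA32_PG_RW;
}
```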
Changes since v3:
- Break out changes to MdeModulePkg/Core/DxeIplPeim into a separate patch
- Add a few cases of applying the mask that were previously missed
- Add PCD support for UefiCpuPkg/PiSmmCpuDxeSmm
Leo Duran (6):
MdeModulePkg: Add PCD PcdPteMemoryEncryptionAddressOrMask
MdeModulePkg/Core/DxeIplPeim: Add support for PCD
PcdPteMemoryEncryptionAddressOrMask
MdeModulePkg/Universal/CapsulePei: Add support for PCD
PcdPteMemoryEncryptionAddressOrMask
UefiCpuPkg/Universal/Acpi/S3Resume2Pei: Add support for PCD
PcdPteMemoryEncryptionAddressOrMask
MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe: Add support for
PCD PcdPteMemoryEncryptionAddressOrMask
UefiCpuPkg/PiSmmCpuDxeSmm: Add support for PCD
PcdPteMemoryEncryptionAddressOrMask
MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf | 5 +-
MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c | 12 +++-
MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c | 39 +++++++---
MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h | 5 ++
MdeModulePkg/MdeModulePkg.dec | 8 +++
.../BootScriptExecutorDxe.inf | 2 +
.../Acpi/BootScriptExecutorDxe/ScriptExecute.c | 7 ++
.../Acpi/BootScriptExecutorDxe/ScriptExecute.h | 5 ++
.../Acpi/BootScriptExecutorDxe/X64/SetIdtEntry.c | 15 ++--
MdeModulePkg/Universal/CapsulePei/CapsulePei.inf | 2 +
.../Universal/CapsulePei/Common/CommonHeader.h | 5 ++
MdeModulePkg/Universal/CapsulePei/UefiCapsule.c | 17 +++--
MdeModulePkg/Universal/CapsulePei/X64/X64Entry.c | 24 +++++--
UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c | 6 +-
UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 83 +++-------------------
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c | 14 ++++
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 8 ++-
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 2 +
UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 14 ++--
UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c | 16 +++--
UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 41 ++++++-----
UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c | 32 +++++----
UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume.c | 17 +++--
.../Universal/Acpi/S3Resume2Pei/S3Resume2Pei.inf | 2 +
24 files changed, 224 insertions(+), 157 deletions(-)
mode change 100644 => 100755 UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
--
2.7.4
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v4 1/6] MdeModulePkg: Add PCD PcdPteMemoryEncryptionAddressOrMask
2017-02-26 17:43 [PATCH v4 0/6] Add PCD PcdPteMemoryEncryptionAddressOrMask Leo Duran
@ 2017-02-26 17:43 ` Leo Duran
2017-02-27 2:20 ` Zeng, Star
2017-02-26 17:43 ` [PATCH v4 2/6] MdeModulePkg/Core/DxeIplPeim: Add support for " Leo Duran
` (5 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Leo Duran @ 2017-02-26 17:43 UTC (permalink / raw)
To: edk2-devel; +Cc: Leo Duran, Feng Tian, Star Zeng, Laszlo Ersek, Brijesh Singh
This PCD holds the address mask for page table entries when memory
encryption is enabled on AMD processors supporting the Secure Encrypted
Virtualization (SEV) feature.
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Leo Duran <leo.duran@amd.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
---
MdeModulePkg/MdeModulePkg.dec | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/MdeModulePkg/MdeModulePkg.dec b/MdeModulePkg/MdeModulePkg.dec
index 426634f..f45ca84 100644
--- a/MdeModulePkg/MdeModulePkg.dec
+++ b/MdeModulePkg/MdeModulePkg.dec
@@ -6,6 +6,8 @@
# Copyright (c) 2007 - 2017, Intel Corporation. All rights reserved.<BR>
# Copyright (c) 2016, Linaro Ltd. All rights reserved.<BR>
# (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
+# Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+#
# This program and the accompanying materials are licensed and made available under
# the terms and conditions of the BSD License that accompanies this distribution.
# The full text of the license may be found at
@@ -1702,6 +1704,12 @@
# @Prompt A list of system FMP ImageTypeId GUIDs
gEfiMdeModulePkgTokenSpaceGuid.PcdSystemFmpCapsuleImageTypeIdGuid|{0x0}|VOID*|0x30001046
+ ## This PCD holds the address mask for page table entries when memory encryption is
+ # enabled on AMD processors supporting the Secure Encrypted Virtualization (SEV) feature.
+ # This mask should be applied when creating 1:1 virtual to physical mapping tables.
+ #
+ gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask|0x0|UINT64|0x30001047
+
[PcdsPatchableInModule]
## Specify memory size with page number for PEI code when
# Loading Module at Fixed Address feature is enabled.
--
2.7.4
* [PATCH v4 2/6] MdeModulePkg/Core/DxeIplPeim: Add support for PCD PcdPteMemoryEncryptionAddressOrMask
2017-02-26 17:43 [PATCH v4 0/6] Add PCD PcdPteMemoryEncryptionAddressOrMask Leo Duran
2017-02-26 17:43 ` [PATCH v4 1/6] MdeModulePkg: " Leo Duran
@ 2017-02-26 17:43 ` Leo Duran
2017-02-26 17:43 ` [PATCH v4 3/6] MdeModulePkg/Universal/CapsulePei: " Leo Duran
` (4 subsequent siblings)
6 siblings, 0 replies; 15+ messages in thread
From: Leo Duran @ 2017-02-26 17:43 UTC (permalink / raw)
To: edk2-devel; +Cc: Leo Duran, Feng Tian, Star Zeng, Laszlo Ersek, Brijesh Singh
This PCD holds the address mask for page table entries when memory
encryption is enabled on AMD processors supporting the Secure Encrypted
Virtualization (SEV) feature.
The mask is applied when creating page tables.
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leo Duran <leo.duran@amd.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
---
MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf | 5 ++-
MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c | 12 ++++++--
MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c | 39 +++++++++++++++++++-----
MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h | 5 +++
4 files changed, 50 insertions(+), 11 deletions(-)
diff --git a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf
index 2bc41be..d62bd9b 100644
--- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf
+++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf
@@ -6,6 +6,8 @@
# needed to run the DXE Foundation.
#
# Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+#
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
# which accompanies this distribution. The full text of the license may be found at
@@ -111,7 +113,8 @@
gEfiMdeModulePkgTokenSpaceGuid.PcdDxeIplSupportUefiDecompress ## CONSUMES
[Pcd.IA32,Pcd.X64]
- gEfiMdeModulePkgTokenSpaceGuid.PcdUse1GPageTable ## SOMETIMES_CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdUse1GPageTable ## SOMETIMES_CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask ## CONSUMES
[Pcd.IA32,Pcd.X64,Pcd.ARM,Pcd.AARCH64]
gEfiMdeModulePkgTokenSpaceGuid.PcdSetNxForStack ## SOMETIMES_CONSUMES
diff --git a/MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c b/MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c
index 8f6a97a..1957326 100644
--- a/MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c
+++ b/MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c
@@ -2,6 +2,8 @@
Ia32-specific functionality for DxeLoad.
Copyright (c) 2006 - 2015, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -82,6 +84,12 @@ Create4GPageTablesIa32Pae (
PAGE_TABLE_ENTRY *PageDirectoryEntry;
UINTN TotalPagesNum;
UINTN PageAddress;
+ UINT64 AddressEncMask;
+
+ //
+ // Make sure AddressEncMask is contained to smallest supported address field
+ //
+ AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
PhysicalAddressBits = 32;
@@ -111,7 +119,7 @@ Create4GPageTablesIa32Pae (
//
// Fill in a Page Directory Pointer Entries
//
- PageDirectoryPointerEntry->Uint64 = (UINT64) (UINTN) PageDirectoryEntry;
+ PageDirectoryPointerEntry->Uint64 = (UINT64) (UINTN) PageDirectoryEntry | AddressEncMask;
PageDirectoryPointerEntry->Bits.Present = 1;
for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PhysicalAddress += SIZE_2MB) {
@@ -124,7 +132,7 @@ Create4GPageTablesIa32Pae (
//
// Fill in the Page Directory entries
//
- PageDirectoryEntry->Uint64 = (UINT64) PhysicalAddress;
+ PageDirectoryEntry->Uint64 = (UINT64) PhysicalAddress | AddressEncMask;
PageDirectoryEntry->Bits.ReadWrite = 1;
PageDirectoryEntry->Bits.Present = 1;
PageDirectoryEntry->Bits.MustBe1 = 1;
diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
index 790f6ab..48150be 100644
--- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
+++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
@@ -16,6 +16,8 @@
3) IA-32 Intel(R) Architecture Software Developer's Manual Volume 3:System Programmer's Guide, Intel
Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -29,6 +31,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
#include "DxeIpl.h"
#include "VirtualMemory.h"
+
/**
Enable Execute Disable Bit.
@@ -65,20 +68,27 @@ Split2MPageTo4K (
EFI_PHYSICAL_ADDRESS PhysicalAddress4K;
UINTN IndexOfPageTableEntries;
PAGE_TABLE_4K_ENTRY *PageTableEntry;
+ UINT64 AddressEncMask;
+
+ //
+ // Make sure AddressEncMask is contained to smallest supported address field
+ //
+ AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
PageTableEntry = AllocatePages (1);
ASSERT (PageTableEntry != NULL);
+
//
// Fill in 2M page entry.
//
- *PageEntry2M = (UINT64) (UINTN) PageTableEntry | IA32_PG_P | IA32_PG_RW;
+ *PageEntry2M = (UINT64) (UINTN) PageTableEntry | AddressEncMask | IA32_PG_P | IA32_PG_RW;
PhysicalAddress4K = PhysicalAddress;
for (IndexOfPageTableEntries = 0; IndexOfPageTableEntries < 512; IndexOfPageTableEntries++, PageTableEntry++, PhysicalAddress4K += SIZE_4KB) {
//
// Fill in the Page Table entries
//
- PageTableEntry->Uint64 = (UINT64) PhysicalAddress4K;
+ PageTableEntry->Uint64 = (UINT64) PhysicalAddress4K | AddressEncMask;
PageTableEntry->Bits.ReadWrite = 1;
PageTableEntry->Bits.Present = 1;
if ((PhysicalAddress4K >= StackBase) && (PhysicalAddress4K < StackBase + StackSize)) {
@@ -110,13 +120,20 @@ Split1GPageTo2M (
EFI_PHYSICAL_ADDRESS PhysicalAddress2M;
UINTN IndexOfPageDirectoryEntries;
PAGE_TABLE_ENTRY *PageDirectoryEntry;
+ UINT64 AddressEncMask;
+
+ //
+ // Make sure AddressEncMask is contained to smallest supported address field
+ //
+ AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
PageDirectoryEntry = AllocatePages (1);
ASSERT (PageDirectoryEntry != NULL);
+
//
// Fill in 1G page entry.
//
- *PageEntry1G = (UINT64) (UINTN) PageDirectoryEntry | IA32_PG_P | IA32_PG_RW;
+ *PageEntry1G = (UINT64) (UINTN) PageDirectoryEntry | AddressEncMask | IA32_PG_P | IA32_PG_RW;
PhysicalAddress2M = PhysicalAddress;
for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PhysicalAddress2M += SIZE_2MB) {
@@ -129,7 +146,7 @@ Split1GPageTo2M (
//
// Fill in the Page Directory entries
//
- PageDirectoryEntry->Uint64 = (UINT64) PhysicalAddress2M;
+ PageDirectoryEntry->Uint64 = (UINT64) PhysicalAddress2M | AddressEncMask;
PageDirectoryEntry->Bits.ReadWrite = 1;
PageDirectoryEntry->Bits.Present = 1;
PageDirectoryEntry->Bits.MustBe1 = 1;
@@ -171,6 +188,12 @@ CreateIdentityMappingPageTables (
VOID *Hob;
BOOLEAN Page1GSupport;
PAGE_TABLE_1G_ENTRY *PageDirectory1GEntry;
+ UINT64 AddressEncMask;
+
+ //
+ // Make sure AddressEncMask is contained to smallest supported address field
+ //
+ AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
Page1GSupport = FALSE;
if (PcdGetBool(PcdUse1GPageTable)) {
@@ -248,7 +271,7 @@ CreateIdentityMappingPageTables (
//
// Make a PML4 Entry
//
- PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry;
+ PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask;
PageMapLevel4Entry->Bits.ReadWrite = 1;
PageMapLevel4Entry->Bits.Present = 1;
@@ -262,7 +285,7 @@ CreateIdentityMappingPageTables (
//
// Fill in the Page Directory entries
//
- PageDirectory1GEntry->Uint64 = (UINT64)PageAddress;
+ PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
PageDirectory1GEntry->Bits.ReadWrite = 1;
PageDirectory1GEntry->Bits.Present = 1;
PageDirectory1GEntry->Bits.MustBe1 = 1;
@@ -280,7 +303,7 @@ CreateIdentityMappingPageTables (
//
// Fill in a Page Directory Pointer Entries
//
- PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry;
+ PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask;
PageDirectoryPointerEntry->Bits.ReadWrite = 1;
PageDirectoryPointerEntry->Bits.Present = 1;
@@ -294,7 +317,7 @@ CreateIdentityMappingPageTables (
//
// Fill in the Page Directory entries
//
- PageDirectoryEntry->Uint64 = (UINT64)PageAddress;
+ PageDirectoryEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
PageDirectoryEntry->Bits.ReadWrite = 1;
PageDirectoryEntry->Bits.Present = 1;
PageDirectoryEntry->Bits.MustBe1 = 1;
diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h
index 20c31f5..7c9bb49 100644
--- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h
+++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h
@@ -8,6 +8,8 @@
4) AMD64 Architecture Programmer's Manual Volume 2: System Programming
Copyright (c) 2006 - 2015, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -23,6 +25,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
#define SYS_CODE64_SEL 0x38
+
#pragma pack(1)
typedef union {
@@ -148,6 +151,8 @@ typedef union {
#define IA32_PG_P BIT0
#define IA32_PG_RW BIT1
+#define PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull
+
/**
Enable Execute Disable Bit.
--
2.7.4
* [PATCH v4 3/6] MdeModulePkg/Universal/CapsulePei: Add support for PCD PcdPteMemoryEncryptionAddressOrMask
2017-02-26 17:43 [PATCH v4 0/6] Add PCD PcdPteMemoryEncryptionAddressOrMask Leo Duran
2017-02-26 17:43 ` [PATCH v4 1/6] MdeModulePkg: " Leo Duran
2017-02-26 17:43 ` [PATCH v4 2/6] MdeModulePkg/Core/DxeIplPeim: Add support for " Leo Duran
@ 2017-02-26 17:43 ` Leo Duran
2017-02-26 17:43 ` [PATCH v4 4/6] UefiCpuPkg/Universal/Acpi/S3Resume2Pei: " Leo Duran
` (3 subsequent siblings)
6 siblings, 0 replies; 15+ messages in thread
From: Leo Duran @ 2017-02-26 17:43 UTC (permalink / raw)
To: edk2-devel; +Cc: Leo Duran, Feng Tian, Star Zeng, Laszlo Ersek, Brijesh Singh
This PCD holds the address mask for page table entries when memory
encryption is enabled on AMD processors supporting the Secure Encrypted
Virtualization (SEV) feature.
The mask is applied when the 4GB tables are created (UefiCapsule.c), and when
the tables are expanded on-demand by page faults above 4GB (X64Entry.c).
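The on-demand path above can be sketched in isolation. Assuming a hypothetical mask with bit 47 set, a fault handler that synthesizes a 1GB mapping for the faulting address keeps the encryption bit while aligning the address down, roughly:

```c
#include <stdint.h>

#define IA32_PG_P   (1ULL << 0)
#define IA32_PG_RW  (1ULL << 1)
#define IA32_PG_PS  (1ULL << 7)

/* Synthesize a present, writable 1GB page entry covering PFAddress.
   OR-ing the mask in before the round-down is safe because the mask
   lives above bit 30, so it survives the alignment. */
static uint64_t
Make1GEntryForFault (uint64_t PFAddress, uint64_t AddressEncMask)
{
  return ((PFAddress | AddressEncMask) & ~((1ULL << 30) - 1))
         | IA32_PG_P | IA32_PG_RW | IA32_PG_PS;
}
```

This mirrors the shape of the PDPTE write in the X64Entry.c hunk; the helper name is invented for the sketch.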
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leo Duran <leo.duran@amd.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
---
MdeModulePkg/Universal/CapsulePei/CapsulePei.inf | 2 ++
| 5 +++++
MdeModulePkg/Universal/CapsulePei/UefiCapsule.c | 17 +++++++++++----
MdeModulePkg/Universal/CapsulePei/X64/X64Entry.c | 24 +++++++++++++++-------
4 files changed, 37 insertions(+), 11 deletions(-)
diff --git a/MdeModulePkg/Universal/CapsulePei/CapsulePei.inf b/MdeModulePkg/Universal/CapsulePei/CapsulePei.inf
index d2ca0d0..c54bc21 100644
--- a/MdeModulePkg/Universal/CapsulePei/CapsulePei.inf
+++ b/MdeModulePkg/Universal/CapsulePei/CapsulePei.inf
@@ -7,6 +7,7 @@
# buffer overflow, integer overflow.
#
# Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
#
# This program and the accompanying materials
# are licensed and made available under the terms and conditions
@@ -76,6 +77,7 @@
[Pcd.IA32]
gEfiMdeModulePkgTokenSpaceGuid.PcdCapsuleCoalesceFile ## SOMETIMES_CONSUMES
gEfiMdeModulePkgTokenSpaceGuid.PcdUse1GPageTable ## SOMETIMES_CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask ## CONSUMES
[FeaturePcd.IA32]
gEfiMdeModulePkgTokenSpaceGuid.PcdDxeIplSwitchToLongMode ## CONSUMES
diff --git a/MdeModulePkg/Universal/CapsulePei/Common/CommonHeader.h b/MdeModulePkg/Universal/CapsulePei/Common/CommonHeader.h
index 7298874..cac4442 100644
--- a/MdeModulePkg/Universal/CapsulePei/Common/CommonHeader.h
+++ b/MdeModulePkg/Universal/CapsulePei/Common/CommonHeader.h
@@ -2,6 +2,8 @@
Common header file.
Copyright (c) 2011 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -20,6 +22,8 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
//
#define EXTRA_PAGE_TABLE_PAGES 8
+#define PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull
+
//
// This capsule PEIM puts its private data at the start of the
// coalesced capsule. Here's the structure definition.
@@ -60,6 +64,7 @@ typedef struct {
EFI_PHYSICAL_ADDRESS MemoryBase64Ptr;
EFI_PHYSICAL_ADDRESS MemorySize64Ptr;
BOOLEAN Page1GSupport;
+ UINT64 AddressEncMask;
} SWITCH_32_TO_64_CONTEXT;
typedef struct {
diff --git a/MdeModulePkg/Universal/CapsulePei/UefiCapsule.c b/MdeModulePkg/Universal/CapsulePei/UefiCapsule.c
index 9ac9d22..34b095a 100644
--- a/MdeModulePkg/Universal/CapsulePei/UefiCapsule.c
+++ b/MdeModulePkg/Universal/CapsulePei/UefiCapsule.c
@@ -2,6 +2,7 @@
Capsule update PEIM for UEFI2.0
Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions
@@ -41,6 +42,7 @@ GLOBAL_REMOVE_IF_UNREFERENCED CONST IA32_DESCRIPTOR mGdt = {
(UINTN) mGdtEntries
};
+
/**
The function will check if 1G page is supported.
@@ -145,6 +147,12 @@ Create4GPageTables (
PAGE_TABLE_ENTRY *PageDirectoryEntry;
UINTN BigPageAddress;
PAGE_TABLE_1G_ENTRY *PageDirectory1GEntry;
+ UINT64 AddressEncMask;
+
+ //
+ // Make sure AddressEncMask is contained to smallest supported address field.
+ //
+ AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
//
// Create 4G page table by default,
@@ -187,7 +195,7 @@ Create4GPageTables (
//
// Make a PML4 Entry
//
- PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry;
+ PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask;
PageMapLevel4Entry->Bits.ReadWrite = 1;
PageMapLevel4Entry->Bits.Present = 1;
@@ -198,7 +206,7 @@ Create4GPageTables (
//
// Fill in the Page Directory entries
//
- PageDirectory1GEntry->Uint64 = (UINT64)PageAddress;
+ PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
PageDirectory1GEntry->Bits.ReadWrite = 1;
PageDirectory1GEntry->Bits.Present = 1;
PageDirectory1GEntry->Bits.MustBe1 = 1;
@@ -215,7 +223,7 @@ Create4GPageTables (
//
// Fill in a Page Directory Pointer Entries
//
- PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry;
+ PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask;
PageDirectoryPointerEntry->Bits.ReadWrite = 1;
PageDirectoryPointerEntry->Bits.Present = 1;
@@ -223,7 +231,7 @@ Create4GPageTables (
//
// Fill in the Page Directory entries
//
- PageDirectoryEntry->Uint64 = (UINT64)PageAddress;
+ PageDirectoryEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
PageDirectoryEntry->Bits.ReadWrite = 1;
PageDirectoryEntry->Bits.Present = 1;
PageDirectoryEntry->Bits.MustBe1 = 1;
@@ -443,6 +451,7 @@ ModeSwitch (
Context.MemoryBase64Ptr = (EFI_PHYSICAL_ADDRESS)(UINTN)&MemoryBase64;
Context.MemorySize64Ptr = (EFI_PHYSICAL_ADDRESS)(UINTN)&MemorySize64;
Context.Page1GSupport = Page1GSupport;
+ Context.AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
//
// Prepare data for return back
diff --git a/MdeModulePkg/Universal/CapsulePei/X64/X64Entry.c b/MdeModulePkg/Universal/CapsulePei/X64/X64Entry.c
index 5ad95d2..e1871c3 100644
--- a/MdeModulePkg/Universal/CapsulePei/X64/X64Entry.c
+++ b/MdeModulePkg/Universal/CapsulePei/X64/X64Entry.c
@@ -2,6 +2,8 @@
The X64 entrypoint is used to process capsule in long mode.
Copyright (c) 2011 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -29,6 +31,7 @@ typedef struct _PAGE_FAULT_CONTEXT {
UINT64 PhyMask;
UINTN PageFaultBuffer;
UINTN PageFaultIndex;
+ UINT64 AddressEncMask;
//
// Store the uplink information for each page being used.
//
@@ -114,21 +117,25 @@ AcquirePage (
)
{
UINTN Address;
+ UINT64 AddressEncMask;
Address = PageFaultContext->PageFaultBuffer + EFI_PAGES_TO_SIZE (PageFaultContext->PageFaultIndex);
ZeroMem ((VOID *) Address, EFI_PAGES_TO_SIZE (1));
+ AddressEncMask = PageFaultContext->AddressEncMask;
+
//
// Cut the previous uplink if it exists and wasn't overwritten.
//
- if ((PageFaultContext->PageFaultUplink[PageFaultContext->PageFaultIndex] != NULL) && ((*PageFaultContext->PageFaultUplink[PageFaultContext->PageFaultIndex] & PageFaultContext->PhyMask) == Address)) {
+ if ((PageFaultContext->PageFaultUplink[PageFaultContext->PageFaultIndex] != NULL) &&
+ ((*PageFaultContext->PageFaultUplink[PageFaultContext->PageFaultIndex] & ~AddressEncMask & PageFaultContext->PhyMask) == Address)) {
*PageFaultContext->PageFaultUplink[PageFaultContext->PageFaultIndex] = 0;
}
//
// Link & Record the current uplink.
//
- *Uplink = Address | IA32_PG_P | IA32_PG_RW;
+ *Uplink = Address | AddressEncMask | IA32_PG_P | IA32_PG_RW;
PageFaultContext->PageFaultUplink[PageFaultContext->PageFaultIndex] = Uplink;
PageFaultContext->PageFaultIndex = (PageFaultContext->PageFaultIndex + 1) % EXTRA_PAGE_TABLE_PAGES;
@@ -153,6 +160,7 @@ PageFaultHandler (
UINT64 *PageTable;
UINT64 PFAddress;
UINTN PTIndex;
+ UINT64 AddressEncMask;
//
// Get the IDT Descriptor.
@@ -163,6 +171,7 @@ PageFaultHandler (
//
PageFaultContext = (PAGE_FAULT_CONTEXT *) (UINTN) (Idtr.Base - sizeof (PAGE_FAULT_CONTEXT));
PhyMask = PageFaultContext->PhyMask;
+ AddressEncMask = PageFaultContext->AddressEncMask;
PFAddress = AsmReadCr2 ();
DEBUG ((EFI_D_ERROR, "CapsuleX64 - PageFaultHandler: Cr2 - %lx\n", PFAddress));
@@ -179,19 +188,19 @@ PageFaultHandler (
if ((PageTable[PTIndex] & IA32_PG_P) == 0) {
AcquirePage (PageFaultContext, &PageTable[PTIndex]);
}
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PhyMask);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~AddressEncMask & PhyMask);
PTIndex = BitFieldRead64 (PFAddress, 30, 38);
// PDPTE
if (PageFaultContext->Page1GSupport) {
- PageTable[PTIndex] = (PFAddress & ~((1ull << 30) - 1)) | IA32_PG_P | IA32_PG_RW | IA32_PG_PS;
+ PageTable[PTIndex] = ((PFAddress | AddressEncMask) & ~((1ull << 30) - 1)) | IA32_PG_P | IA32_PG_RW | IA32_PG_PS;
} else {
if ((PageTable[PTIndex] & IA32_PG_P) == 0) {
AcquirePage (PageFaultContext, &PageTable[PTIndex]);
}
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PhyMask);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~AddressEncMask & PhyMask);
PTIndex = BitFieldRead64 (PFAddress, 21, 29);
// PD
- PageTable[PTIndex] = (PFAddress & ~((1ull << 21) - 1)) | IA32_PG_P | IA32_PG_RW | IA32_PG_PS;
+ PageTable[PTIndex] = ((PFAddress | AddressEncMask) & ~((1ull << 21) - 1)) | IA32_PG_P | IA32_PG_RW | IA32_PG_PS;
}
return NULL;
@@ -244,6 +253,7 @@ _ModuleEntryPoint (
// Hook page fault handler to handle >4G request.
//
PageFaultIdtTable.PageFaultContext.Page1GSupport = EntrypointContext->Page1GSupport;
+ PageFaultIdtTable.PageFaultContext.AddressEncMask = EntrypointContext->AddressEncMask;
IdtEntry = (IA32_IDT_GATE_DESCRIPTOR *) (X64Idtr.Base + (14 * sizeof (IA32_IDT_GATE_DESCRIPTOR)));
HookPageFaultHandler (IdtEntry, &(PageFaultIdtTable.PageFaultContext));
@@ -298,4 +308,4 @@ _ModuleEntryPoint (
//
ASSERT (FALSE);
return EFI_SUCCESS;
-}
\ No newline at end of file
+}
--
2.7.4
* [PATCH v4 4/6] UefiCpuPkg/Universal/Acpi/S3Resume2Pei: Add support for PCD PcdPteMemoryEncryptionAddressOrMask
2017-02-26 17:43 [PATCH v4 0/6] Add PCD PcdPteMemoryEncryptionAddressOrMask Leo Duran
` (2 preceding siblings ...)
2017-02-26 17:43 ` [PATCH v4 3/6] MdeModulePkg/Universal/CapsulePei: " Leo Duran
@ 2017-02-26 17:43 ` Leo Duran
2017-02-28 8:12 ` Fan, Jeff
2017-02-26 17:43 ` [PATCH v4 5/6] MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe: " Leo Duran
` (2 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Leo Duran @ 2017-02-26 17:43 UTC (permalink / raw)
To: edk2-devel
Cc: Leo Duran, Jeff Fan, Feng Tian, Star Zeng, Laszlo Ersek,
Brijesh Singh
This PCD holds the address mask for page table entries when memory
encryption is enabled on AMD processors supporting the Secure Encrypted
Virtualization (SEV) feature.
The mask is applied when page tables are created (S3Resume.c).
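The clamping step each touched module repeats is worth isolating: without it, a misconfigured PCD carrying bits below the address field could corrupt PTE flag bits. A minimal sketch (the mask value used in testing is hypothetical):

```c
#include <stdint.h>

#define PAGING_1G_ADDRESS_MASK_64  0x000FFFFFC0000000ULL

/* Reduce a raw PCD value to the bits that may legally appear in the
   address field of a page-table entry (bits 30..51, matching the
   PAGING_1G_ADDRESS_MASK_64 constant the patches introduce). */
static uint64_t
ClampEncMask (uint64_t PcdValue)
{
  return PcdValue & PAGING_1G_ADDRESS_MASK_64;
}
```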
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leo Duran <leo.duran@amd.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
---
UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume.c | 17 +++++++++++++----
UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume2Pei.inf | 2 ++
2 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume.c b/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume.c
index d306fba..a9d1042 100644
--- a/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume.c
+++ b/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume.c
@@ -5,6 +5,7 @@
control is passed to OS waking up handler.
Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
+ Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions
@@ -58,6 +59,8 @@
#define STACK_ALIGN_DOWN(Ptr) \
((UINTN)(Ptr) & ~(UINTN)(CPU_STACK_ALIGNMENT - 1))
+#define PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull
+
#pragma pack(1)
typedef union {
struct {
@@ -614,6 +617,12 @@ RestoreS3PageTables (
VOID *Hob;
BOOLEAN Page1GSupport;
PAGE_TABLE_1G_ENTRY *PageDirectory1GEntry;
+ UINT64 AddressEncMask;
+
+ //
+ // Make sure AddressEncMask is contained to smallest supported address field
+ //
+ AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
//
// NOTE: We have to ASSUME the page table generation format, because we do not know whole page table information.
@@ -696,7 +705,7 @@ RestoreS3PageTables (
//
// Make a PML4 Entry
//
- PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry;
+ PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask;
PageMapLevel4Entry->Bits.ReadWrite = 1;
PageMapLevel4Entry->Bits.Present = 1;
@@ -707,7 +716,7 @@ RestoreS3PageTables (
//
// Fill in the Page Directory entries
//
- PageDirectory1GEntry->Uint64 = (UINT64)PageAddress;
+ PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
PageDirectory1GEntry->Bits.ReadWrite = 1;
PageDirectory1GEntry->Bits.Present = 1;
PageDirectory1GEntry->Bits.MustBe1 = 1;
@@ -724,7 +733,7 @@ RestoreS3PageTables (
//
// Fill in a Page Directory Pointer Entries
//
- PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry;
+ PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask;
PageDirectoryPointerEntry->Bits.ReadWrite = 1;
PageDirectoryPointerEntry->Bits.Present = 1;
@@ -732,7 +741,7 @@ RestoreS3PageTables (
//
// Fill in the Page Directory entries
//
- PageDirectoryEntry->Uint64 = (UINT64)PageAddress;
+ PageDirectoryEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
PageDirectoryEntry->Bits.ReadWrite = 1;
PageDirectoryEntry->Bits.Present = 1;
PageDirectoryEntry->Bits.MustBe1 = 1;
diff --git a/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume2Pei.inf b/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume2Pei.inf
index 73aeca3..d514523 100644
--- a/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume2Pei.inf
+++ b/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume2Pei.inf
@@ -6,6 +6,7 @@
# control is passed to OS waking up handler.
#
# Copyright (c) 2010 - 2014, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
#
# This program and the accompanying materials are
# licensed and made available under the terms and conditions of the BSD License
@@ -91,6 +92,7 @@
[Pcd]
gEfiMdeModulePkgTokenSpaceGuid.PcdUse1GPageTable ## SOMETIMES_CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask ## CONSUMES
[Depex]
TRUE
--
2.7.4
* [PATCH v4 5/6] MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe: Add support for PCD PcdPteMemoryEncryptionAddressOrMask
2017-02-26 17:43 [PATCH v4 0/6] Add PCD PcdPteMemoryEncryptionAddressOrMask Leo Duran
` (3 preceding siblings ...)
2017-02-26 17:43 ` [PATCH v4 4/6] UefiCpuPkg/Universal/Acpi/S3Resume2Pei: " Leo Duran
@ 2017-02-26 17:43 ` Leo Duran
2017-02-26 17:43 ` [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: " Leo Duran
2017-03-01 4:56 ` [PATCH v4 0/6] Add " Zeng, Star
6 siblings, 0 replies; 15+ messages in thread
From: Leo Duran @ 2017-02-26 17:43 UTC (permalink / raw)
To: edk2-devel
Cc: Leo Duran, Jeff Fan, Feng Tian, Star Zeng, Laszlo Ersek,
Brijesh Singh
This PCD holds the address mask for page table entries when memory
encryption is enabled on AMD processors supporting the Secure Encrypted
Virtualization (SEV) feature.
This module updates the under-4GB page tables configured by the S3-Resume
code in UefiCpuPkg/Universal/Acpi/S3Resume2Pei. The mask is saved at module
start (ScriptExecute.c) and applied when the tables are expanded on demand
by page faults above 4GB (SetIdtEntry.c).
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leo Duran <leo.duran@amd.com>
---
.../Acpi/BootScriptExecutorDxe/BootScriptExecutorDxe.inf | 2 ++
.../Universal/Acpi/BootScriptExecutorDxe/ScriptExecute.c | 7 +++++++
.../Universal/Acpi/BootScriptExecutorDxe/ScriptExecute.h | 5 +++++
.../Acpi/BootScriptExecutorDxe/X64/SetIdtEntry.c | 15 +++++++++------
4 files changed, 23 insertions(+), 6 deletions(-)
diff --git a/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/BootScriptExecutorDxe.inf b/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/BootScriptExecutorDxe.inf
index 7cd38cf..29af7f5 100644
--- a/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/BootScriptExecutorDxe.inf
+++ b/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/BootScriptExecutorDxe.inf
@@ -5,6 +5,7 @@
# depends on any PEI or DXE service.
#
# Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
#
# This program and the accompanying materials are
# licensed and made available under the terms and conditions of the BSD License
@@ -85,6 +86,7 @@
gEfiMdeModulePkgTokenSpaceGuid.PcdUse1GPageTable ## SOMETIMES_CONSUMES
gEfiMdeModulePkgTokenSpaceGuid.PcdMemoryProfilePropertyMask ## CONSUMES
gEfiMdeModulePkgTokenSpaceGuid.PcdAcpiS3Enable ## CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask ## CONSUMES
[Depex]
gEfiLockBoxProtocolGuid
diff --git a/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/ScriptExecute.c b/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/ScriptExecute.c
index f67fbca..22d4349 100644
--- a/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/ScriptExecute.c
+++ b/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/ScriptExecute.c
@@ -5,6 +5,7 @@
in the entry point. The functionality is to interpret and restore the S3 boot script
Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
@@ -23,6 +24,7 @@ EFI_GUID mBootScriptExecutorImageGuid = {
};
BOOLEAN mPage1GSupport = FALSE;
+UINT64 mAddressEncMask = 0;
/**
Entry function of Boot script exector. This function will be executed in
@@ -408,6 +410,11 @@ BootScriptExecutorEntryPoint (
}
//
+ // Make sure AddressEncMask is contained to smallest supported address field.
+ //
+ mAddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
+
+ //
// Test if the gEfiCallerIdGuid of this image is already installed. if not, the entry
// point is loaded by DXE code which is the first time loaded. or else, it is already
// be reloaded be itself.This is a work-around
diff --git a/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/ScriptExecute.h b/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/ScriptExecute.h
index 772347a..7532756 100644
--- a/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/ScriptExecute.h
+++ b/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/ScriptExecute.h
@@ -5,6 +5,7 @@
in the entry point. The functionality is to interpret and restore the S3 boot script
Copyright (c) 2006 - 2014, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
@@ -44,6 +45,9 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
#include <Protocol/DxeSmmReadyToLock.h>
#include <IndustryStandard/Acpi.h>
+
+#define PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull
+
/**
a ASM function to transfer control to OS.
@@ -87,5 +91,6 @@ SetIdtEntry (
extern UINT32 AsmFixAddress16;
extern UINT32 AsmJmpAddr32;
extern BOOLEAN mPage1GSupport;
+extern UINT64 mAddressEncMask;
#endif //_BOOT_SCRIPT_EXECUTOR_H_
diff --git a/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/X64/SetIdtEntry.c b/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/X64/SetIdtEntry.c
index 6674560..d433cf1 100644
--- a/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/X64/SetIdtEntry.c
+++ b/MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe/X64/SetIdtEntry.c
@@ -4,6 +4,8 @@
Set a IDT entry for interrupt vector 3 for debug purpose for x64 platform
Copyright (c) 2006 - 2015, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
@@ -200,14 +202,15 @@ AcquirePage (
//
// Cut the previous uplink if it exists and wasn't overwritten.
//
- if ((mPageFaultUplink[mPageFaultIndex] != NULL) && ((*mPageFaultUplink[mPageFaultIndex] & mPhyMask) == Address)) {
+ if ((mPageFaultUplink[mPageFaultIndex] != NULL) &&
+ ((*mPageFaultUplink[mPageFaultIndex] & ~mAddressEncMask & mPhyMask) == Address)) {
*mPageFaultUplink[mPageFaultIndex] = 0;
}
//
// Link & Record the current uplink.
//
- *Uplink = Address | IA32_PG_P | IA32_PG_RW;
+ *Uplink = Address | mAddressEncMask | IA32_PG_P | IA32_PG_RW;
mPageFaultUplink[mPageFaultIndex] = Uplink;
mPageFaultIndex = (mPageFaultIndex + 1) % EXTRA_PAGE_TABLE_PAGES;
@@ -245,19 +248,19 @@ PageFaultHandler (
if ((PageTable[PTIndex] & IA32_PG_P) == 0) {
AcquirePage (&PageTable[PTIndex]);
}
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & mPhyMask);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & mPhyMask);
PTIndex = BitFieldRead64 (PFAddress, 30, 38);
// PDPTE
if (mPage1GSupport) {
- PageTable[PTIndex] = (PFAddress & ~((1ull << 30) - 1)) | IA32_PG_P | IA32_PG_RW | IA32_PG_PS;
+ PageTable[PTIndex] = ((PFAddress | mAddressEncMask) & ~((1ull << 30) - 1)) | IA32_PG_P | IA32_PG_RW | IA32_PG_PS;
} else {
if ((PageTable[PTIndex] & IA32_PG_P) == 0) {
AcquirePage (&PageTable[PTIndex]);
}
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & mPhyMask);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & mPhyMask);
PTIndex = BitFieldRead64 (PFAddress, 21, 29);
// PD
- PageTable[PTIndex] = (PFAddress & ~((1ull << 21) - 1)) | IA32_PG_P | IA32_PG_RW | IA32_PG_PS;
+ PageTable[PTIndex] = ((PFAddress | mAddressEncMask) & ~((1ull << 21) - 1)) | IA32_PG_P | IA32_PG_RW | IA32_PG_PS;
}
return TRUE;
--
2.7.4
* [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: Add support for PCD PcdPteMemoryEncryptionAddressOrMask
2017-02-26 17:43 [PATCH v4 0/6] Add PCD PcdPteMemoryEncryptionAddressOrMask Leo Duran
2017-02-26 17:43 ` [PATCH v4 5/6] MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe: " Leo Duran
@ 2017-02-26 17:43 ` Leo Duran
2017-02-27 7:51 ` Fan, Jeff
2017-02-28 8:12 ` Fan, Jeff
2017-03-01 4:56 ` [PATCH v4 0/6] Add " Zeng, Star
6 siblings, 2 replies; 15+ messages in thread
From: Leo Duran @ 2017-02-26 17:43 UTC (permalink / raw)
To: edk2-devel
Cc: Leo Duran, Jeff Fan, Feng Tian, Star Zeng, Laszlo Ersek,
Brijesh Singh
This PCD holds the address mask for page table entries when memory
encryption is enabled on AMD processors supporting the Secure Encrypted
Virtualization (SEV) feature.
The mask is applied when page-table entries are created or modified.
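Before the mask is used, the patch caps the raw PCD value so it can only ever touch address bits (the `& PAGING_1G_ADDRESS_MASK_64` in PiCpuSmmEntry). A minimal sketch of that capping step, with a hypothetical helper name and only the constant taken from the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Constant from the patch: address bits 30..51 of a 1G-page entry. */
#define PAGING_1G_ADDRESS_MASK_64  0x000FFFFFC0000000ULL

/* Contain a raw PcdPteMemoryEncryptionAddressOrMask value to the
   supported address field: low flag bits and bits above bit 51 are
   dropped, so OR-ing the result into an entry can never corrupt
   the present/read-write/attribute bits. */
static uint64_t CapEncMask (uint64_t PcdValue)
{
  return PcdValue & PAGING_1G_ADDRESS_MASK_64;
}
```

For example, a mask consisting of SEV's C-bit (commonly bit 47) passes through unchanged, while stray low bits are discarded.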
Cc: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leo Duran <leo.duran@amd.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c | 6 +-
UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 83 +++-------------------
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c | 14 ++++
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 8 ++-
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 2 +
UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 14 ++--
UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c | 16 +++--
UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 41 ++++++-----
UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c | 32 +++++----
9 files changed, 91 insertions(+), 125 deletions(-)
mode change 100644 => 100755 UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
index c1f4b7e..119810a 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
@@ -2,6 +2,8 @@
Page table manipulation functions for IA-32 processors
Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -204,7 +206,7 @@ SetPageTableAttributes (
PageTableSplitted = (PageTableSplitted || IsSplitted);
for (Index3 = 0; Index3 < 4; Index3++) {
- L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & PAGING_4K_ADDRESS_MASK_64);
+ L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L2PageTable == NULL) {
continue;
}
@@ -217,7 +219,7 @@ SetPageTableAttributes (
// 2M
continue;
}
- L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & PAGING_4K_ADDRESS_MASK_64);
+ L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L1PageTable == NULL) {
continue;
}
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
index c7aa48b..d99ad46 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
@@ -2,6 +2,8 @@
SMM MP service implementation
Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -781,7 +783,8 @@ Gen4GPageTable (
// Set Page Directory Pointers
//
for (Index = 0; Index < 4; Index++) {
- Pte[Index] = (UINTN)PageTable + EFI_PAGE_SIZE * (Index + 1) + (Is32BitPageTable ? IA32_PAE_PDPTE_ATTRIBUTE_BITS : PAGE_ATTRIBUTE_BITS);
+ Pte[Index] = (UINT64)((UINTN)PageTable + EFI_PAGE_SIZE * (Index + 1)) | mAddressEncMask |
+ (Is32BitPageTable ? IA32_PAE_PDPTE_ATTRIBUTE_BITS : PAGE_ATTRIBUTE_BITS);
}
Pte += EFI_PAGE_SIZE / sizeof (*Pte);
@@ -789,7 +792,7 @@ Gen4GPageTable (
// Fill in Page Directory Entries
//
for (Index = 0; Index < EFI_PAGE_SIZE * 4 / sizeof (*Pte); Index++) {
- Pte[Index] = (Index << 21) | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
+ Pte[Index] = (Index << 21) | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
}
if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
@@ -797,8 +800,8 @@ Gen4GPageTable (
GuardPage = mSmmStackArrayBase + EFI_PAGE_SIZE;
Pdpte = (UINT64*)PageTable;
for (PageIndex = Low2MBoundary; PageIndex <= High2MBoundary; PageIndex += SIZE_2MB) {
- Pte = (UINT64*)(UINTN)(Pdpte[BitFieldRead32 ((UINT32)PageIndex, 30, 31)] & ~(EFI_PAGE_SIZE - 1));
- Pte[BitFieldRead32 ((UINT32)PageIndex, 21, 29)] = (UINT64)Pages | PAGE_ATTRIBUTE_BITS;
+ Pte = (UINT64*)(UINTN)(Pdpte[BitFieldRead32 ((UINT32)PageIndex, 30, 31)] & ~mAddressEncMask & ~(EFI_PAGE_SIZE - 1));
+ Pte[BitFieldRead32 ((UINT32)PageIndex, 21, 29)] = (UINT64)Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
//
// Fill in Page Table Entries
//
@@ -809,13 +812,13 @@ Gen4GPageTable (
//
// Mark the guard page as non-present
//
- Pte[Index] = PageAddress;
+ Pte[Index] = PageAddress | mAddressEncMask;
GuardPage += mSmmStackSize;
if (GuardPage > mSmmStackArrayEnd) {
GuardPage = 0;
}
} else {
- Pte[Index] = PageAddress | PAGE_ATTRIBUTE_BITS;
+ Pte[Index] = PageAddress | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
}
PageAddress+= EFI_PAGE_SIZE;
}
@@ -827,74 +830,6 @@ Gen4GPageTable (
}
/**
- Set memory cache ability.
-
- @param PageTable PageTable Address
- @param Address Memory Address to change cache ability
- @param Cacheability Cache ability to set
-
-**/
-VOID
-SetCacheability (
- IN UINT64 *PageTable,
- IN UINTN Address,
- IN UINT8 Cacheability
- )
-{
- UINTN PTIndex;
- VOID *NewPageTableAddress;
- UINT64 *NewPageTable;
- UINTN Index;
-
- ASSERT ((Address & EFI_PAGE_MASK) == 0);
-
- if (sizeof (UINTN) == sizeof (UINT64)) {
- PTIndex = (UINTN)RShiftU64 (Address, 39) & 0x1ff;
- ASSERT (PageTable[PTIndex] & IA32_PG_P);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
- }
-
- PTIndex = (UINTN)RShiftU64 (Address, 30) & 0x1ff;
- ASSERT (PageTable[PTIndex] & IA32_PG_P);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
-
- //
- // A perfect implementation should check the original cacheability with the
- // one being set, and break a 2M page entry into pieces only when they
- // disagreed.
- //
- PTIndex = (UINTN)RShiftU64 (Address, 21) & 0x1ff;
- if ((PageTable[PTIndex] & IA32_PG_PS) != 0) {
- //
- // Allocate a page from SMRAM
- //
- NewPageTableAddress = AllocatePageTableMemory (1);
- ASSERT (NewPageTableAddress != NULL);
-
- NewPageTable = (UINT64 *)NewPageTableAddress;
-
- for (Index = 0; Index < 0x200; Index++) {
- NewPageTable[Index] = PageTable[PTIndex];
- if ((NewPageTable[Index] & IA32_PG_PAT_2M) != 0) {
- NewPageTable[Index] &= ~((UINT64)IA32_PG_PAT_2M);
- NewPageTable[Index] |= (UINT64)IA32_PG_PAT_4K;
- }
- NewPageTable[Index] |= (UINT64)(Index << EFI_PAGE_SHIFT);
- }
-
- PageTable[PTIndex] = ((UINTN)NewPageTableAddress & gPhyMask) | PAGE_ATTRIBUTE_BITS;
- }
-
- ASSERT (PageTable[PTIndex] & IA32_PG_P);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
-
- PTIndex = (UINTN)RShiftU64 (Address, 12) & 0x1ff;
- ASSERT (PageTable[PTIndex] & IA32_PG_P);
- PageTable[PTIndex] &= ~((UINT64)((IA32_PG_PAT_4K | IA32_PG_CD | IA32_PG_WT)));
- PageTable[PTIndex] |= (UINT64)Cacheability;
-}
-
-/**
Schedule a procedure to run on the specified CPU.
@param[in] Procedure The address of the procedure to run
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
old mode 100644
new mode 100755
index fc7714a..d5b8900
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
@@ -2,6 +2,8 @@
Agent Module to load other modules to deploy SMM Entry Vector for X86 CPU.
Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -97,6 +99,11 @@ BOOLEAN mSmmReadyToLock = FALSE;
BOOLEAN mSmmCodeAccessCheckEnable = FALSE;
//
+// Global copy of the PcdPteMemoryEncryptionAddressOrMask
+//
+UINT64 mAddressEncMask = 0;
+
+//
// Spin lock used to serialize setting of SMM Code Access Check feature
//
SPIN_LOCK *mConfigSmmCodeAccessCheckLock = NULL;
@@ -605,6 +612,13 @@ PiCpuSmmEntry (
DEBUG ((EFI_D_INFO, "PcdCpuSmmCodeAccessCheckEnable = %d\n", mSmmCodeAccessCheckEnable));
//
+ // Save the PcdPteMemoryEncryptionAddressOrMask value into a global variable.
+ // Make sure AddressEncMask is contained to smallest supported address field.
+ //
+ mAddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
+ DEBUG ((EFI_D_INFO, "mAddressEncMask = 0x%lx\n", mAddressEncMask));
+
+ //
// If support CPU hot plug, we need to allocate resources for possibly hot-added processors
//
if (FeaturePcdGet (PcdCpuHotPlugSupport)) {
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
index 69c54fb..71af2f1 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
@@ -2,6 +2,8 @@
Agent Module to load other modules to deploy SMM Entry Vector for X86 CPU.
Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -184,7 +186,6 @@ extern EFI_SMM_CPU_PROTOCOL mSmmCpu;
///
extern UINT8 mSmmSaveStateRegisterLma;
-
//
// SMM CPU Protocol function prototypes.
//
@@ -415,6 +416,11 @@ extern SPIN_LOCK *mPFLock;
extern SPIN_LOCK *mConfigSmmCodeAccessCheckLock;
extern SPIN_LOCK *mMemoryMappedLock;
+//
+// Copy of the PcdPteMemoryEncryptionAddressOrMask
+//
+extern UINT64 mAddressEncMask;
+
/**
Create 4G PageTable in SMRAM.
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
index d409edf..099792e 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
@@ -5,6 +5,7 @@
# provides CPU specific services in SMM.
#
# Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
#
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
@@ -157,6 +158,7 @@
gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmSyncMode ## CONSUMES
gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmStaticPageTable ## CONSUMES
gEfiMdeModulePkgTokenSpaceGuid.PcdAcpiS3Enable ## CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask ## CONSUMES
[Depex]
gEfiMpServiceProtocolGuid
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
index 13323d5..a535389 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
@@ -119,7 +119,7 @@ GetPageTableEntry (
return NULL;
}
- L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & PAGING_4K_ADDRESS_MASK_64);
+ L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
} else {
L3PageTable = (UINT64 *)GetPageTableBase ();
}
@@ -133,7 +133,7 @@ GetPageTableEntry (
return &L3PageTable[Index3];
}
- L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & PAGING_4K_ADDRESS_MASK_64);
+ L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L2PageTable[Index2] == 0) {
*PageAttribute = PageNone;
return NULL;
@@ -145,7 +145,7 @@ GetPageTableEntry (
}
// 4k
- L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & PAGING_4K_ADDRESS_MASK_64);
+ L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if ((L1PageTable[Index1] == 0) && (Address != 0)) {
*PageAttribute = PageNone;
return NULL;
@@ -304,9 +304,9 @@ SplitPage (
}
BaseAddress = *PageEntry & PAGING_2M_ADDRESS_MASK_64;
for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
- NewPageEntry[Index] = BaseAddress + SIZE_4KB * Index + ((*PageEntry) & PAGE_PROGATE_BITS);
+ NewPageEntry[Index] = (BaseAddress + SIZE_4KB * Index) | mAddressEncMask | ((*PageEntry) & PAGE_PROGATE_BITS);
}
- (*PageEntry) = (UINT64)(UINTN)NewPageEntry + PAGE_ATTRIBUTE_BITS;
+ (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
return RETURN_SUCCESS;
} else {
return RETURN_UNSUPPORTED;
@@ -325,9 +325,9 @@ SplitPage (
}
BaseAddress = *PageEntry & PAGING_1G_ADDRESS_MASK_64;
for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
- NewPageEntry[Index] = BaseAddress + SIZE_2MB * Index + IA32_PG_PS + ((*PageEntry) & PAGE_PROGATE_BITS);
+ NewPageEntry[Index] = (BaseAddress + SIZE_2MB * Index) | mAddressEncMask | IA32_PG_PS | ((*PageEntry) & PAGE_PROGATE_BITS);
}
- (*PageEntry) = (UINT64)(UINTN)NewPageEntry + PAGE_ATTRIBUTE_BITS;
+ (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
return RETURN_SUCCESS;
} else {
return RETURN_UNSUPPORTED;
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
index f53819e..1b84e2c 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
@@ -2,6 +2,8 @@
Enable SMM profile.
Copyright (c) 2012 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -513,7 +515,7 @@ InitPaging (
//
continue;
}
- Pde = (UINT64 *)(UINTN)(Pml4[Level1] & PHYSICAL_ADDRESS_MASK);
+ Pde = (UINT64 *)(UINTN)(Pml4[Level1] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
} else {
Pde = (UINT64*)(UINTN)mSmmProfileCr3;
}
@@ -530,7 +532,7 @@ InitPaging (
//
continue;
}
- Pte = (UINT64 *)(UINTN)(*Pde & PHYSICAL_ADDRESS_MASK);
+ Pte = (UINT64 *)(UINTN)(*Pde & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
if (Pte == 0) {
continue;
}
@@ -557,9 +559,9 @@ InitPaging (
// Split it
for (Level4 = 0; Level4 < SIZE_4KB / sizeof(*Pt); Level4++) {
- Pt[Level4] = Address + ((Level4 << 12) | PAGE_ATTRIBUTE_BITS);
+ Pt[Level4] = Address + ((Level4 << 12) | mAddressEncMask | PAGE_ATTRIBUTE_BITS);
} // end for PT
- *Pte = (UINTN)Pt | PAGE_ATTRIBUTE_BITS;
+ *Pte = (UINT64)(UINTN)Pt | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
} // end if IsAddressSplit
} // end for PTE
} // end for PDE
@@ -577,7 +579,7 @@ InitPaging (
//
continue;
}
- Pde = (UINT64 *)(UINTN)(Pml4[Level1] & PHYSICAL_ADDRESS_MASK);
+ Pde = (UINT64 *)(UINTN)(Pml4[Level1] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
} else {
Pde = (UINT64*)(UINTN)mSmmProfileCr3;
}
@@ -597,7 +599,7 @@ InitPaging (
}
continue;
}
- Pte = (UINT64 *)(UINTN)(*Pde & PHYSICAL_ADDRESS_MASK);
+ Pte = (UINT64 *)(UINTN)(*Pde & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
if (Pte == 0) {
continue;
}
@@ -624,7 +626,7 @@ InitPaging (
}
} else {
// 4KB page
- Pt = (UINT64 *)(UINTN)(*Pte & PHYSICAL_ADDRESS_MASK);
+ Pt = (UINT64 *)(UINTN)(*Pte & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
if (Pt == 0) {
continue;
}
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
index 17b2f4c..19b19d8 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
@@ -2,6 +2,8 @@
Page Fault (#PF) handler for X64 processors
Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -16,6 +18,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
#define PAGE_TABLE_PAGES 8
#define ACC_MAX_BIT BIT3
+
LIST_ENTRY mPagePool = INITIALIZE_LIST_HEAD_VARIABLE (mPagePool);
BOOLEAN m1GPageTableSupport = FALSE;
UINT8 mPhysicalAddressBits;
@@ -168,13 +171,13 @@ SetStaticPageTable (
//
// Each PML4 entry points to a page of Page Directory Pointer entries.
//
- PageDirectoryPointerEntry = (UINT64 *) ((*PageMapLevel4Entry) & gPhyMask);
+ PageDirectoryPointerEntry = (UINT64 *) ((*PageMapLevel4Entry) & ~mAddressEncMask & gPhyMask);
if (PageDirectoryPointerEntry == NULL) {
PageDirectoryPointerEntry = AllocatePageTableMemory (1);
ASSERT(PageDirectoryPointerEntry != NULL);
ZeroMem (PageDirectoryPointerEntry, EFI_PAGES_TO_SIZE(1));
- *PageMapLevel4Entry = ((UINTN)PageDirectoryPointerEntry & gPhyMask) | PAGE_ATTRIBUTE_BITS;
+ *PageMapLevel4Entry = (UINT64)(UINTN)PageDirectoryPointerEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
}
if (m1GPageTableSupport) {
@@ -189,7 +192,7 @@ SetStaticPageTable (
//
// Fill in the Page Directory entries
//
- *PageDirectory1GEntry = (PageAddress & gPhyMask) | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
+ *PageDirectory1GEntry = PageAddress | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
}
} else {
PageAddress = BASE_4GB;
@@ -204,7 +207,7 @@ SetStaticPageTable (
// Each Directory Pointer entries points to a page of Page Directory entires.
// So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop.
//
- PageDirectoryEntry = (UINT64 *) ((*PageDirectoryPointerEntry) & gPhyMask);
+ PageDirectoryEntry = (UINT64 *) ((*PageDirectoryPointerEntry) & ~mAddressEncMask & gPhyMask);
if (PageDirectoryEntry == NULL) {
PageDirectoryEntry = AllocatePageTableMemory (1);
ASSERT(PageDirectoryEntry != NULL);
@@ -213,14 +216,14 @@ SetStaticPageTable (
//
// Fill in a Page Directory Pointer Entries
//
- *PageDirectoryPointerEntry = (UINT64)(UINTN)PageDirectoryEntry | PAGE_ATTRIBUTE_BITS;
+ *PageDirectoryPointerEntry = (UINT64)(UINTN)PageDirectoryEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
}
for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) {
//
// Fill in the Page Directory entries
//
- *PageDirectoryEntry = (UINT64)PageAddress | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
+ *PageDirectoryEntry = PageAddress | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
}
}
}
@@ -276,7 +279,7 @@ SmmInitPageTable (
//
PTEntry = (UINT64*)AllocatePageTableMemory (1);
ASSERT (PTEntry != NULL);
- *PTEntry = Pages | PAGE_ATTRIBUTE_BITS;
+ *PTEntry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
ZeroMem (PTEntry + 1, EFI_PAGE_SIZE - sizeof (*PTEntry));
//
@@ -457,7 +460,7 @@ ReclaimPages (
//
continue;
}
- Pdpt = (UINT64*)(UINTN)(Pml4[Pml4Index] & gPhyMask);
+ Pdpt = (UINT64*)(UINTN)(Pml4[Pml4Index] & ~mAddressEncMask & gPhyMask);
PML4EIgnore = FALSE;
for (PdptIndex = 0; PdptIndex < EFI_PAGE_SIZE / sizeof (*Pdpt); PdptIndex++) {
if ((Pdpt[PdptIndex] & IA32_PG_P) == 0 || (Pdpt[PdptIndex] & IA32_PG_PMNT) != 0) {
@@ -478,7 +481,7 @@ ReclaimPages (
// we will not check PML4 entry more
//
PML4EIgnore = TRUE;
- Pdt = (UINT64*)(UINTN)(Pdpt[PdptIndex] & gPhyMask);
+ Pdt = (UINT64*)(UINTN)(Pdpt[PdptIndex] & ~mAddressEncMask & gPhyMask);
PDPTEIgnore = FALSE;
for (PdtIndex = 0; PdtIndex < EFI_PAGE_SIZE / sizeof(*Pdt); PdtIndex++) {
if ((Pdt[PdtIndex] & IA32_PG_P) == 0 || (Pdt[PdtIndex] & IA32_PG_PMNT) != 0) {
@@ -560,7 +563,7 @@ ReclaimPages (
//
// Secondly, insert the page pointed by this entry into page pool and clear this entry
//
- InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(*ReleasePageAddress & gPhyMask));
+ InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(*ReleasePageAddress & ~mAddressEncMask & gPhyMask));
*ReleasePageAddress = 0;
//
@@ -572,14 +575,14 @@ ReclaimPages (
//
// If 4 KByte Page Table is released, check the PDPT entry
//
- Pdpt = (UINT64*)(UINTN)(Pml4[MinPml4] & gPhyMask);
+ Pdpt = (UINT64*)(UINTN)(Pml4[MinPml4] & ~mAddressEncMask & gPhyMask);
SubEntriesNum = GetSubEntriesNum(Pdpt + MinPdpt);
if (SubEntriesNum == 0) {
//
// Release the empty Page Directory table if there was no more 4 KByte Page Table entry
// clear the Page directory entry
//
- InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pdpt[MinPdpt] & gPhyMask));
+ InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pdpt[MinPdpt] & ~mAddressEncMask & gPhyMask));
Pdpt[MinPdpt] = 0;
//
// Go on checking the PML4 table
@@ -603,7 +606,7 @@ ReclaimPages (
// Release the empty PML4 table if there was no more 1G KByte Page Table entry
// clear the Page directory entry
//
- InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pml4[MinPml4] & gPhyMask));
+ InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pml4[MinPml4] & ~mAddressEncMask & gPhyMask));
Pml4[MinPml4] = 0;
MinPdpt = (UINTN)-1;
continue;
@@ -747,7 +750,7 @@ SmiDefaultPFHandler (
//
// If the entry is not present, allocate one page from page pool for it
//
- PageTable[PTIndex] = AllocPage () | PAGE_ATTRIBUTE_BITS;
+ PageTable[PTIndex] = AllocPage () | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
} else {
//
// Save the upper entry address
@@ -760,7 +763,7 @@ SmiDefaultPFHandler (
//
PageTable[PTIndex] |= (UINT64)IA32_PG_A;
SetAccNum (PageTable + PTIndex, 7);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & gPhyMask);
}
PTIndex = BitFieldRead64 (PFAddress, StartBit, StartBit + 8);
@@ -776,7 +779,7 @@ SmiDefaultPFHandler (
//
// Fill the new entry
//
- PageTable[PTIndex] = (PFAddress & gPhyMask & ~((1ull << EndBit) - 1)) |
+ PageTable[PTIndex] = ((PFAddress | mAddressEncMask) & gPhyMask & ~((1ull << EndBit) - 1)) |
PageAttribute | IA32_PG_A | PAGE_ATTRIBUTE_BITS;
if (UpperEntry != NULL) {
SetSubEntriesNum (UpperEntry, GetSubEntriesNum (UpperEntry) + 1);
@@ -927,7 +930,7 @@ SetPageTableAttributes (
PageTableSplitted = (PageTableSplitted || IsSplitted);
for (Index4 = 0; Index4 < SIZE_4KB/sizeof(UINT64); Index4++) {
- L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & PAGING_4K_ADDRESS_MASK_64);
+ L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L3PageTable == NULL) {
continue;
}
@@ -940,7 +943,7 @@ SetPageTableAttributes (
// 1G
continue;
}
- L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & PAGING_4K_ADDRESS_MASK_64);
+ L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L2PageTable == NULL) {
continue;
}
@@ -953,7 +956,7 @@ SetPageTableAttributes (
// 2M
continue;
}
- L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & PAGING_4K_ADDRESS_MASK_64);
+ L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L1PageTable == NULL) {
continue;
}
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
index cc393dc..37da5fb 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
@@ -2,6 +2,8 @@
X64 processor specific functions to enable SMM profile.
Copyright (c) 2012 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -52,7 +54,7 @@ InitSmmS3Cr3 (
//
PTEntry = (UINT64*)AllocatePageTableMemory (1);
ASSERT (PTEntry != NULL);
- *PTEntry = Pages | PAGE_ATTRIBUTE_BITS;
+ *PTEntry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
ZeroMem (PTEntry + 1, EFI_PAGE_SIZE - sizeof (*PTEntry));
//
@@ -111,14 +113,14 @@ AcquirePage (
//
// Cut the previous uplink if it exists and wasn't overwritten
//
- if ((mPFPageUplink[mPFPageIndex] != NULL) && ((*mPFPageUplink[mPFPageIndex] & PHYSICAL_ADDRESS_MASK) == Address)) {
+ if ((mPFPageUplink[mPFPageIndex] != NULL) && ((*mPFPageUplink[mPFPageIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK) == Address)) {
*mPFPageUplink[mPFPageIndex] = 0;
}
//
// Link & Record the current uplink
//
- *Uplink = Address | PAGE_ATTRIBUTE_BITS;
+ *Uplink = Address | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
mPFPageUplink[mPFPageIndex] = Uplink;
mPFPageIndex = (mPFPageIndex + 1) % MAX_PF_PAGE_COUNT;
@@ -168,33 +170,33 @@ RestorePageTableAbove4G (
PTIndex = BitFieldRead64 (PFAddress, 39, 47);
if ((PageTable[PTIndex] & IA32_PG_P) != 0) {
// PML4E
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
PTIndex = BitFieldRead64 (PFAddress, 30, 38);
if ((PageTable[PTIndex] & IA32_PG_P) != 0) {
// PDPTE
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
PTIndex = BitFieldRead64 (PFAddress, 21, 29);
// PD
if ((PageTable[PTIndex] & IA32_PG_PS) != 0) {
//
// 2MB page
//
- Address = (UINT64)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
- if ((Address & PHYSICAL_ADDRESS_MASK & ~((1ull << 21) - 1)) == ((PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 21) - 1)))) {
+ Address = (UINT64)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
+ if ((Address & ~((1ull << 21) - 1)) == ((PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 21) - 1)))) {
Existed = TRUE;
}
} else {
//
// 4KB page
//
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
if (PageTable != 0) {
//
// When there is a valid entry to map to 4KB page, need not create a new entry to map 2MB.
//
PTIndex = BitFieldRead64 (PFAddress, 12, 20);
- Address = (UINT64)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
- if ((Address & PHYSICAL_ADDRESS_MASK & ~((1ull << 12) - 1)) == (PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 12) - 1))) {
+ Address = (UINT64)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
+ if ((Address & ~((1ull << 12) - 1)) == (PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 12) - 1))) {
Existed = TRUE;
}
}
@@ -227,13 +229,13 @@ RestorePageTableAbove4G (
PFAddress = AsmReadCr2 ();
// PML4E
PTIndex = BitFieldRead64 (PFAddress, 39, 47);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
// PDPTE
PTIndex = BitFieldRead64 (PFAddress, 30, 38);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
// PD
PTIndex = BitFieldRead64 (PFAddress, 21, 29);
- Address = PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK;
+ Address = PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK;
//
// Check if 2MB-page entry need be changed to 4KB-page entry.
//
@@ -241,9 +243,9 @@ RestorePageTableAbove4G (
AcquirePage (&PageTable[PTIndex]);
// PTE
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
for (Index = 0; Index < 512; Index++) {
- PageTable[Index] = Address | PAGE_ATTRIBUTE_BITS;
+ PageTable[Index] = Address | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
if (!IsAddressValid (Address, &Nx)) {
PageTable[Index] = PageTable[Index] & (INTN)(INT32)(~PAGE_ATTRIBUTE_BITS);
}
--
2.7.4
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH v4 1/6] MdeModulePkg: Add PCD PcdPteMemoryEncryptionAddressOrMask
2017-02-26 17:43 ` [PATCH v4 1/6] MdeModulePkg: " Leo Duran
@ 2017-02-27 2:20 ` Zeng, Star
2017-02-27 14:12 ` Duran, Leo
0 siblings, 1 reply; 15+ messages in thread
From: Zeng, Star @ 2017-02-27 2:20 UTC (permalink / raw)
To: Leo Duran, edk2-devel@ml01.01.org
Cc: Tian, Feng, Laszlo Ersek, Brijesh Singh, Zeng, Star
We saw you defined 4K/2M/1G in previous patch series,
#define PAGING_4K_ADDRESS_MASK_64 0x000FFFFFFFFFF000ull
#define PAGING_2M_ADDRESS_MASK_64 0x000FFFFFFFE00000ull
#define PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull
But only the 1G mask is defined and used in this patch series; is that on purpose?
#define PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull
That means PcdPteMemoryEncryptionAddressOrMask will only be valid when 1G aligned, right?
Thanks,
Star
-----Original Message-----
From: Leo Duran [mailto:leo.duran@amd.com]
Sent: Monday, February 27, 2017 1:43 AM
To: edk2-devel@ml01.01.org
Cc: Leo Duran <leo.duran@amd.com>; Tian, Feng <feng.tian@intel.com>; Zeng, Star <star.zeng@intel.com>; Laszlo Ersek <lersek@redhat.com>; Brijesh Singh <brijesh.singh@amd.com>
Subject: [PATCH v4 1/6] MdeModulePkg: Add PCD PcdPteMemoryEncryptionAddressOrMask
This PCD holds the address mask for page table entries when memory encryption is enabled on AMD processors supporting the Secure Encrypted Virtualization (SEV) feature.
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Leo Duran <leo.duran@amd.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
---
MdeModulePkg/MdeModulePkg.dec | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/MdeModulePkg/MdeModulePkg.dec b/MdeModulePkg/MdeModulePkg.dec
index 426634f..f45ca84 100644
--- a/MdeModulePkg/MdeModulePkg.dec
+++ b/MdeModulePkg/MdeModulePkg.dec
@@ -6,6 +6,8 @@
# Copyright (c) 2007 - 2017, Intel Corporation. All rights reserved.<BR>
# Copyright (c) 2016, Linaro Ltd. All rights reserved.<BR>
# (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
+# Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
#
# This program and the accompanying materials are licensed and made available under
# the terms and conditions of the BSD License that accompanies this distribution.
# The full text of the license may be found at
@@ -1702,6 +1704,12 @@
# @Prompt A list of system FMP ImageTypeId GUIDs
gEfiMdeModulePkgTokenSpaceGuid.PcdSystemFmpCapsuleImageTypeIdGuid|{0x0}|VOID*|0x30001046
+ ## This PCD holds the address mask for page table entries when memory
+ #  encryption is enabled on AMD processors supporting the Secure Encrypted
+ #  Virtualization (SEV) feature.
+ #  This mask should be applied when creating 1:1 virtual to physical mapping tables.
+ #
+ gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask|0x0|UINT64|0x30001047
+
[PcdsPatchableInModule]
## Specify memory size with page number for PEI code when
# Loading Module at Fixed Address feature is enabled.
--
2.7.4
* Re: [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: Add support for PCD PcdPteMemoryEncryptionAddressOrMask
2017-02-26 17:43 ` [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: " Leo Duran
@ 2017-02-27 7:51 ` Fan, Jeff
2017-02-27 14:15 ` Duran, Leo
2017-02-28 8:12 ` Fan, Jeff
1 sibling, 1 reply; 15+ messages in thread
From: Fan, Jeff @ 2017-02-27 7:51 UTC (permalink / raw)
To: Leo Duran, edk2-devel@ml01.01.org
Cc: Tian, Feng, Zeng, Star, Laszlo Ersek, Brijesh Singh
Leo,
I just saw your patch removed SetCacheability() also. I will drop my patch in https://www.mail-archive.com/edk2-devel@lists.01.org/msg22944.html :-)
Thanks!
Jeff
-----Original Message-----
From: Leo Duran [mailto:leo.duran@amd.com]
Sent: Monday, February 27, 2017 1:43 AM
To: edk2-devel@ml01.01.org
Cc: Leo Duran; Fan, Jeff; Tian, Feng; Zeng, Star; Laszlo Ersek; Brijesh Singh
Subject: [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: Add support for PCD PcdPteMemoryEncryptionAddressOrMask
This PCD holds the address mask for page table entries when memory encryption is enabled on AMD processors supporting the Secure Encrypted Virtualization (SEV) feature.
The mask is applied when page table entries are created or modified.
CC: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leo Duran <leo.duran@amd.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c | 6 +-
UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 83 +++-------------------
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c | 14 ++++
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 8 ++-
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 2 +
UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 14 ++--
UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c | 16 +++--
UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 41 ++++++-----
UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c | 32 +++++----
9 files changed, 91 insertions(+), 125 deletions(-)
mode change 100644 => 100755 UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
index c1f4b7e..119810a 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
@@ -2,6 +2,8 @@
Page table manipulation functions for IA-32 processors
Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials are licensed and made available under the terms and conditions of the BSD License which accompanies this distribution. The full text of the license may be found at
@@ -204,7 +206,7 @@ SetPageTableAttributes (
PageTableSplitted = (PageTableSplitted || IsSplitted);
for (Index3 = 0; Index3 < 4; Index3++) {
- L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & PAGING_4K_ADDRESS_MASK_64);
+ L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L2PageTable == NULL) {
continue;
}
@@ -217,7 +219,7 @@ SetPageTableAttributes (
// 2M
continue;
}
- L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & PAGING_4K_ADDRESS_MASK_64);
+ L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L1PageTable == NULL) {
continue;
}
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
index c7aa48b..d99ad46 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
@@ -2,6 +2,8 @@
SMM MP service implementation
Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials are licensed and made available under the terms and conditions of the BSD License which accompanies this distribution. The full text of the license may be found at
@@ -781,7 +783,8 @@ Gen4GPageTable (
// Set Page Directory Pointers
//
for (Index = 0; Index < 4; Index++) {
- Pte[Index] = (UINTN)PageTable + EFI_PAGE_SIZE * (Index + 1) + (Is32BitPageTable ? IA32_PAE_PDPTE_ATTRIBUTE_BITS : PAGE_ATTRIBUTE_BITS);
+ Pte[Index] = (UINT64)((UINTN)PageTable + EFI_PAGE_SIZE * (Index + 1)) | mAddressEncMask | (Is32BitPageTable ? IA32_PAE_PDPTE_ATTRIBUTE_BITS : PAGE_ATTRIBUTE_BITS);
}
Pte += EFI_PAGE_SIZE / sizeof (*Pte);
@@ -789,7 +792,7 @@ Gen4GPageTable (
// Fill in Page Directory Entries
//
for (Index = 0; Index < EFI_PAGE_SIZE * 4 / sizeof (*Pte); Index++) {
- Pte[Index] = (Index << 21) | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
+ Pte[Index] = (Index << 21) | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
}
if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
@@ -797,8 +800,8 @@ Gen4GPageTable (
GuardPage = mSmmStackArrayBase + EFI_PAGE_SIZE;
Pdpte = (UINT64*)PageTable;
for (PageIndex = Low2MBoundary; PageIndex <= High2MBoundary; PageIndex += SIZE_2MB) {
- Pte = (UINT64*)(UINTN)(Pdpte[BitFieldRead32 ((UINT32)PageIndex, 30, 31)] & ~(EFI_PAGE_SIZE - 1));
- Pte[BitFieldRead32 ((UINT32)PageIndex, 21, 29)] = (UINT64)Pages | PAGE_ATTRIBUTE_BITS;
+ Pte = (UINT64*)(UINTN)(Pdpte[BitFieldRead32 ((UINT32)PageIndex, 30, 31)] & ~mAddressEncMask & ~(EFI_PAGE_SIZE - 1));
+ Pte[BitFieldRead32 ((UINT32)PageIndex, 21, 29)] = (UINT64)Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
//
// Fill in Page Table Entries
//
@@ -809,13 +812,13 @@ Gen4GPageTable (
//
// Mark the guard page as non-present
//
- Pte[Index] = PageAddress;
+ Pte[Index] = PageAddress | mAddressEncMask;
GuardPage += mSmmStackSize;
if (GuardPage > mSmmStackArrayEnd) {
GuardPage = 0;
}
} else {
- Pte[Index] = PageAddress | PAGE_ATTRIBUTE_BITS;
+ Pte[Index] = PageAddress | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
}
PageAddress+= EFI_PAGE_SIZE;
}
@@ -827,74 +830,6 @@ Gen4GPageTable (
}
/**
- Set memory cache ability.
-
- @param PageTable PageTable Address
- @param Address Memory Address to change cache ability
- @param Cacheability Cache ability to set
-
-**/
-VOID
-SetCacheability (
- IN UINT64 *PageTable,
- IN UINTN Address,
- IN UINT8 Cacheability
- )
-{
- UINTN PTIndex;
- VOID *NewPageTableAddress;
- UINT64 *NewPageTable;
- UINTN Index;
-
- ASSERT ((Address & EFI_PAGE_MASK) == 0);
-
- if (sizeof (UINTN) == sizeof (UINT64)) {
- PTIndex = (UINTN)RShiftU64 (Address, 39) & 0x1ff;
- ASSERT (PageTable[PTIndex] & IA32_PG_P);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
- }
-
- PTIndex = (UINTN)RShiftU64 (Address, 30) & 0x1ff;
- ASSERT (PageTable[PTIndex] & IA32_PG_P);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
-
- //
- // A perfect implementation should check the original cacheability with the
- // one being set, and break a 2M page entry into pieces only when they
- // disagreed.
- //
- PTIndex = (UINTN)RShiftU64 (Address, 21) & 0x1ff;
- if ((PageTable[PTIndex] & IA32_PG_PS) != 0) {
- //
- // Allocate a page from SMRAM
- //
- NewPageTableAddress = AllocatePageTableMemory (1);
- ASSERT (NewPageTableAddress != NULL);
-
- NewPageTable = (UINT64 *)NewPageTableAddress;
-
- for (Index = 0; Index < 0x200; Index++) {
- NewPageTable[Index] = PageTable[PTIndex];
- if ((NewPageTable[Index] & IA32_PG_PAT_2M) != 0) {
- NewPageTable[Index] &= ~((UINT64)IA32_PG_PAT_2M);
- NewPageTable[Index] |= (UINT64)IA32_PG_PAT_4K;
- }
- NewPageTable[Index] |= (UINT64)(Index << EFI_PAGE_SHIFT);
- }
-
- PageTable[PTIndex] = ((UINTN)NewPageTableAddress & gPhyMask) | PAGE_ATTRIBUTE_BITS;
- }
-
- ASSERT (PageTable[PTIndex] & IA32_PG_P);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
-
- PTIndex = (UINTN)RShiftU64 (Address, 12) & 0x1ff;
- ASSERT (PageTable[PTIndex] & IA32_PG_P);
- PageTable[PTIndex] &= ~((UINT64)((IA32_PG_PAT_4K | IA32_PG_CD | IA32_PG_WT)));
- PageTable[PTIndex] |= (UINT64)Cacheability;
-}
-
-/**
Schedule a procedure to run on the specified CPU.
@param[in] Procedure The address of the procedure to run
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
old mode 100644
new mode 100755
index fc7714a..d5b8900
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
@@ -2,6 +2,8 @@
Agent Module to load other modules to deploy SMM Entry Vector for X86 CPU.
Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials are licensed and made available under the terms and conditions of the BSD License which accompanies this distribution. The full text of the license may be found at
@@ -97,6 +99,11 @@ BOOLEAN mSmmReadyToLock = FALSE;
BOOLEAN mSmmCodeAccessCheckEnable = FALSE;
//
+// Global copy of the PcdPteMemoryEncryptionAddressOrMask
+//
+UINT64 mAddressEncMask = 0;
+
+//
// Spin lock used to serialize setting of SMM Code Access Check feature
//
SPIN_LOCK *mConfigSmmCodeAccessCheckLock = NULL;
@@ -605,6 +612,13 @@ PiCpuSmmEntry (
DEBUG ((EFI_D_INFO, "PcdCpuSmmCodeAccessCheckEnable = %d\n", mSmmCodeAccessCheckEnable));
//
+ // Save the PcdPteMemoryEncryptionAddressOrMask value into a global variable.
+ // Make sure AddressEncMask is contained to smallest supported address field.
+ //
+ mAddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
+ DEBUG ((EFI_D_INFO, "mAddressEncMask = 0x%lx\n", mAddressEncMask));
+
+ //
// If support CPU hot plug, we need to allocate resources for possibly hot-added processors
//
if (FeaturePcdGet (PcdCpuHotPlugSupport)) {
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
index 69c54fb..71af2f1 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
@@ -2,6 +2,8 @@
Agent Module to load other modules to deploy SMM Entry Vector for X86 CPU.
Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials are licensed and made available under the terms and conditions of the BSD License which accompanies this distribution. The full text of the license may be found at
@@ -184,7 +186,6 @@ extern EFI_SMM_CPU_PROTOCOL mSmmCpu;
///
extern UINT8 mSmmSaveStateRegisterLma;
-
//
// SMM CPU Protocol function prototypes.
//
@@ -415,6 +416,11 @@ extern SPIN_LOCK *mPFLock;
extern SPIN_LOCK *mConfigSmmCodeAccessCheckLock;
extern SPIN_LOCK *mMemoryMappedLock;
+//
+// Copy of the PcdPteMemoryEncryptionAddressOrMask
+//
+extern UINT64 mAddressEncMask;
+
/**
Create 4G PageTable in SMRAM.
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
index d409edf..099792e 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
@@ -5,6 +5,7 @@
# provides CPU specific services in SMM.
#
# Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
#
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
@@ -157,6 +158,7 @@
gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmSyncMode ## CONSUMES
gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmStaticPageTable ## CONSUMES
gEfiMdeModulePkgTokenSpaceGuid.PcdAcpiS3Enable ## CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask ## CONSUMES
[Depex]
gEfiMpServiceProtocolGuid
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
index 13323d5..a535389 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
@@ -119,7 +119,7 @@ GetPageTableEntry (
return NULL;
}
- L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & PAGING_4K_ADDRESS_MASK_64);
+ L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
} else {
L3PageTable = (UINT64 *)GetPageTableBase ();
}
@@ -133,7 +133,7 @@ GetPageTableEntry (
return &L3PageTable[Index3];
}
- L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & PAGING_4K_ADDRESS_MASK_64);
+ L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L2PageTable[Index2] == 0) {
*PageAttribute = PageNone;
return NULL;
@@ -145,7 +145,7 @@ GetPageTableEntry (
}
// 4k
- L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & PAGING_4K_ADDRESS_MASK_64);
+ L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if ((L1PageTable[Index1] == 0) && (Address != 0)) {
*PageAttribute = PageNone;
return NULL;
@@ -304,9 +304,9 @@ SplitPage (
}
BaseAddress = *PageEntry & PAGING_2M_ADDRESS_MASK_64;
for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
- NewPageEntry[Index] = BaseAddress + SIZE_4KB * Index + ((*PageEntry) & PAGE_PROGATE_BITS);
+ NewPageEntry[Index] = (BaseAddress + SIZE_4KB * Index) | mAddressEncMask | ((*PageEntry) & PAGE_PROGATE_BITS);
}
- (*PageEntry) = (UINT64)(UINTN)NewPageEntry + PAGE_ATTRIBUTE_BITS;
+ (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
return RETURN_SUCCESS;
} else {
return RETURN_UNSUPPORTED;
@@ -325,9 +325,9 @@ SplitPage (
}
BaseAddress = *PageEntry & PAGING_1G_ADDRESS_MASK_64;
for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
- NewPageEntry[Index] = BaseAddress + SIZE_2MB * Index + IA32_PG_PS + ((*PageEntry) & PAGE_PROGATE_BITS);
+ NewPageEntry[Index] = (BaseAddress + SIZE_2MB * Index) | mAddressEncMask | IA32_PG_PS | ((*PageEntry) & PAGE_PROGATE_BITS);
}
- (*PageEntry) = (UINT64)(UINTN)NewPageEntry + PAGE_ATTRIBUTE_BITS;
+ (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
return RETURN_SUCCESS;
} else {
return RETURN_UNSUPPORTED;
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
index f53819e..1b84e2c 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
@@ -2,6 +2,8 @@
Enable SMM profile.
Copyright (c) 2012 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials are licensed and made available under the terms and conditions of the BSD License which accompanies this distribution. The full text of the license may be found at
@@ -513,7 +515,7 @@ InitPaging (
//
continue;
}
- Pde = (UINT64 *)(UINTN)(Pml4[Level1] & PHYSICAL_ADDRESS_MASK);
+ Pde = (UINT64 *)(UINTN)(Pml4[Level1] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
} else {
Pde = (UINT64*)(UINTN)mSmmProfileCr3;
}
@@ -530,7 +532,7 @@ InitPaging (
//
continue;
}
- Pte = (UINT64 *)(UINTN)(*Pde & PHYSICAL_ADDRESS_MASK);
+ Pte = (UINT64 *)(UINTN)(*Pde & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
if (Pte == 0) {
continue;
}
@@ -557,9 +559,9 @@ InitPaging (
// Split it
for (Level4 = 0; Level4 < SIZE_4KB / sizeof(*Pt); Level4++) {
- Pt[Level4] = Address + ((Level4 << 12) | PAGE_ATTRIBUTE_BITS);
+ Pt[Level4] = Address + ((Level4 << 12) | mAddressEncMask | PAGE_ATTRIBUTE_BITS);
} // end for PT
- *Pte = (UINTN)Pt | PAGE_ATTRIBUTE_BITS;
+ *Pte = (UINT64)(UINTN)Pt | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
} // end if IsAddressSplit
} // end for PTE
} // end for PDE
@@ -577,7 +579,7 @@ InitPaging (
//
continue;
}
- Pde = (UINT64 *)(UINTN)(Pml4[Level1] & PHYSICAL_ADDRESS_MASK);
+ Pde = (UINT64 *)(UINTN)(Pml4[Level1] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
} else {
Pde = (UINT64*)(UINTN)mSmmProfileCr3;
}
@@ -597,7 +599,7 @@ InitPaging (
}
continue;
}
- Pte = (UINT64 *)(UINTN)(*Pde & PHYSICAL_ADDRESS_MASK);
+ Pte = (UINT64 *)(UINTN)(*Pde & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
if (Pte == 0) {
continue;
}
@@ -624,7 +626,7 @@ InitPaging (
}
} else {
// 4KB page
- Pt = (UINT64 *)(UINTN)(*Pte & PHYSICAL_ADDRESS_MASK);
+ Pt = (UINT64 *)(UINTN)(*Pte & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
if (Pt == 0) {
continue;
}
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
index 17b2f4c..19b19d8 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
@@ -2,6 +2,8 @@
Page Fault (#PF) handler for X64 processors
Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials are licensed and made available under the terms and conditions of the BSD License which accompanies this distribution. The full text of the license may be found at
@@ -16,6 +18,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
#define PAGE_TABLE_PAGES 8
#define ACC_MAX_BIT BIT3
+
LIST_ENTRY mPagePool = INITIALIZE_LIST_HEAD_VARIABLE (mPagePool);
BOOLEAN m1GPageTableSupport = FALSE;
UINT8 mPhysicalAddressBits;
@@ -168,13 +171,13 @@ SetStaticPageTable (
//
// Each PML4 entry points to a page of Page Directory Pointer entries.
//
- PageDirectoryPointerEntry = (UINT64 *) ((*PageMapLevel4Entry) & gPhyMask);
+ PageDirectoryPointerEntry = (UINT64 *) ((*PageMapLevel4Entry) & ~mAddressEncMask & gPhyMask);
if (PageDirectoryPointerEntry == NULL) {
PageDirectoryPointerEntry = AllocatePageTableMemory (1);
ASSERT(PageDirectoryPointerEntry != NULL);
ZeroMem (PageDirectoryPointerEntry, EFI_PAGES_TO_SIZE(1));
- *PageMapLevel4Entry = ((UINTN)PageDirectoryPointerEntry & gPhyMask) | PAGE_ATTRIBUTE_BITS;
+ *PageMapLevel4Entry = (UINT64)(UINTN)PageDirectoryPointerEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
}
if (m1GPageTableSupport) {
@@ -189,7 +192,7 @@ SetStaticPageTable (
//
// Fill in the Page Directory entries
//
- *PageDirectory1GEntry = (PageAddress & gPhyMask) | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
+ *PageDirectory1GEntry = PageAddress | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
}
} else {
PageAddress = BASE_4GB;
@@ -204,7 +207,7 @@ SetStaticPageTable (
// Each Directory Pointer entries points to a page of Page Directory entires.
// So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop.
//
- PageDirectoryEntry = (UINT64 *) ((*PageDirectoryPointerEntry) & gPhyMask);
+ PageDirectoryEntry = (UINT64 *) ((*PageDirectoryPointerEntry) & ~mAddressEncMask & gPhyMask);
if (PageDirectoryEntry == NULL) {
PageDirectoryEntry = AllocatePageTableMemory (1);
ASSERT(PageDirectoryEntry != NULL);
@@ -213,14 +216,14 @@ SetStaticPageTable (
//
// Fill in a Page Directory Pointer Entries
//
- *PageDirectoryPointerEntry = (UINT64)(UINTN)PageDirectoryEntry | PAGE_ATTRIBUTE_BITS;
+ *PageDirectoryPointerEntry = (UINT64)(UINTN)PageDirectoryEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
}
for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) {
//
// Fill in the Page Directory entries
//
- *PageDirectoryEntry = (UINT64)PageAddress | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
+ *PageDirectoryEntry = PageAddress | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
}
}
}
@@ -276,7 +279,7 @@ SmmInitPageTable (
//
PTEntry = (UINT64*)AllocatePageTableMemory (1);
ASSERT (PTEntry != NULL);
- *PTEntry = Pages | PAGE_ATTRIBUTE_BITS;
+ *PTEntry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
ZeroMem (PTEntry + 1, EFI_PAGE_SIZE - sizeof (*PTEntry));
//
@@ -457,7 +460,7 @@ ReclaimPages (
//
continue;
}
- Pdpt = (UINT64*)(UINTN)(Pml4[Pml4Index] & gPhyMask);
+ Pdpt = (UINT64*)(UINTN)(Pml4[Pml4Index] & ~mAddressEncMask & gPhyMask);
PML4EIgnore = FALSE;
for (PdptIndex = 0; PdptIndex < EFI_PAGE_SIZE / sizeof (*Pdpt); PdptIndex++) {
if ((Pdpt[PdptIndex] & IA32_PG_P) == 0 || (Pdpt[PdptIndex] & IA32_PG_PMNT) != 0) {
@@ -478,7 +481,7 @@ ReclaimPages (
// we will not check PML4 entry more
//
PML4EIgnore = TRUE;
- Pdt = (UINT64*)(UINTN)(Pdpt[PdptIndex] & gPhyMask);
+ Pdt = (UINT64*)(UINTN)(Pdpt[PdptIndex] & ~mAddressEncMask & gPhyMask);
PDPTEIgnore = FALSE;
for (PdtIndex = 0; PdtIndex < EFI_PAGE_SIZE / sizeof(*Pdt); PdtIndex++) {
if ((Pdt[PdtIndex] & IA32_PG_P) == 0 || (Pdt[PdtIndex] & IA32_PG_PMNT) != 0) {
@@ -560,7 +563,7 @@ ReclaimPages (
//
// Secondly, insert the page pointed by this entry into page pool and clear this entry
//
- InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(*ReleasePageAddress & gPhyMask));
+ InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(*ReleasePageAddress & ~mAddressEncMask & gPhyMask));
*ReleasePageAddress = 0;
//
@@ -572,14 +575,14 @@ ReclaimPages (
//
// If 4 KByte Page Table is released, check the PDPT entry
//
- Pdpt = (UINT64*)(UINTN)(Pml4[MinPml4] & gPhyMask);
+ Pdpt = (UINT64*)(UINTN)(Pml4[MinPml4] & ~mAddressEncMask & gPhyMask);
SubEntriesNum = GetSubEntriesNum(Pdpt + MinPdpt);
if (SubEntriesNum == 0) {
//
// Release the empty Page Directory table if there was no more 4 KByte Page Table entry
// clear the Page directory entry
//
- InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pdpt[MinPdpt] & gPhyMask));
+ InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pdpt[MinPdpt] & ~mAddressEncMask & gPhyMask));
Pdpt[MinPdpt] = 0;
//
// Go on checking the PML4 table
@@ -603,7 +606,7 @@ ReclaimPages (
// Release the empty PML4 table if there was no more 1G KByte Page Table entry
// clear the Page directory entry
//
- InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pml4[MinPml4] & gPhyMask));
+ InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pml4[MinPml4] & ~mAddressEncMask & gPhyMask));
Pml4[MinPml4] = 0;
MinPdpt = (UINTN)-1;
continue;
@@ -747,7 +750,7 @@ SmiDefaultPFHandler (
//
// If the entry is not present, allocate one page from page pool for it
//
- PageTable[PTIndex] = AllocPage () | PAGE_ATTRIBUTE_BITS;
+ PageTable[PTIndex] = AllocPage () | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
} else {
//
// Save the upper entry address
@@ -760,7 +763,7 @@ SmiDefaultPFHandler (
//
PageTable[PTIndex] |= (UINT64)IA32_PG_A;
SetAccNum (PageTable + PTIndex, 7);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & gPhyMask);
}
PTIndex = BitFieldRead64 (PFAddress, StartBit, StartBit + 8);
@@ -776,7 +779,7 @@ SmiDefaultPFHandler (
//
// Fill the new entry
//
- PageTable[PTIndex] = (PFAddress & gPhyMask & ~((1ull << EndBit) - 1)) |
+ PageTable[PTIndex] = ((PFAddress | mAddressEncMask) & gPhyMask & ~((1ull << EndBit) - 1)) |
PageAttribute | IA32_PG_A | PAGE_ATTRIBUTE_BITS;
if (UpperEntry != NULL) {
SetSubEntriesNum (UpperEntry, GetSubEntriesNum (UpperEntry) + 1);
@@ -927,7 +930,7 @@ SetPageTableAttributes (
PageTableSplitted = (PageTableSplitted || IsSplitted);
for (Index4 = 0; Index4 < SIZE_4KB/sizeof(UINT64); Index4++) {
- L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & PAGING_4K_ADDRESS_MASK_64);
+ L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L3PageTable == NULL) {
continue;
}
@@ -940,7 +943,7 @@ SetPageTableAttributes (
// 1G
continue;
}
- L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & PAGING_4K_ADDRESS_MASK_64);
+ L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L2PageTable == NULL) {
continue;
}
@@ -953,7 +956,7 @@ SetPageTableAttributes (
// 2M
continue;
}
- L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & PAGING_4K_ADDRESS_MASK_64);
+ L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L1PageTable == NULL) {
continue;
}
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
index cc393dc..37da5fb 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
@@ -2,6 +2,8 @@
X64 processor specific functions to enable SMM profile.
Copyright (c) 2012 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials are licensed and made available under the terms and conditions of the BSD License which accompanies this distribution. The full text of the license may be found at
@@ -52,7 +54,7 @@ InitSmmS3Cr3 (
//
PTEntry = (UINT64*)AllocatePageTableMemory (1);
ASSERT (PTEntry != NULL);
- *PTEntry = Pages | PAGE_ATTRIBUTE_BITS;
+ *PTEntry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
ZeroMem (PTEntry + 1, EFI_PAGE_SIZE - sizeof (*PTEntry));
//
@@ -111,14 +113,14 @@ AcquirePage (
//
// Cut the previous uplink if it exists and wasn't overwritten
//
- if ((mPFPageUplink[mPFPageIndex] != NULL) && ((*mPFPageUplink[mPFPageIndex] & PHYSICAL_ADDRESS_MASK) == Address)) {
+ if ((mPFPageUplink[mPFPageIndex] != NULL) && ((*mPFPageUplink[mPFPageIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK) == Address)) {
*mPFPageUplink[mPFPageIndex] = 0;
}
//
// Link & Record the current uplink
//
- *Uplink = Address | PAGE_ATTRIBUTE_BITS;
+ *Uplink = Address | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
mPFPageUplink[mPFPageIndex] = Uplink;
mPFPageIndex = (mPFPageIndex + 1) % MAX_PF_PAGE_COUNT;
@@ -168,33 +170,33 @@ RestorePageTableAbove4G (
PTIndex = BitFieldRead64 (PFAddress, 39, 47);
if ((PageTable[PTIndex] & IA32_PG_P) != 0) {
// PML4E
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
PTIndex = BitFieldRead64 (PFAddress, 30, 38);
if ((PageTable[PTIndex] & IA32_PG_P) != 0) {
// PDPTE
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
PTIndex = BitFieldRead64 (PFAddress, 21, 29);
// PD
if ((PageTable[PTIndex] & IA32_PG_PS) != 0) {
//
// 2MB page
//
- Address = (UINT64)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
- if ((Address & PHYSICAL_ADDRESS_MASK & ~((1ull << 21) - 1)) == ((PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 21) - 1)))) {
+ Address = (UINT64)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
+ if ((Address & ~((1ull << 21) - 1)) == ((PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 21) - 1)))) {
Existed = TRUE;
}
} else {
//
// 4KB page
//
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
if (PageTable != 0) {
//
// When there is a valid entry to map to 4KB page, need not create a new entry to map 2MB.
//
PTIndex = BitFieldRead64 (PFAddress, 12, 20);
- Address = (UINT64)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
- if ((Address & PHYSICAL_ADDRESS_MASK & ~((1ull << 12) - 1)) == (PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 12) - 1))) {
+ Address = (UINT64)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
+ if ((Address & ~((1ull << 12) - 1)) == (PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 12) - 1))) {
Existed = TRUE;
}
}
@@ -227,13 +229,13 @@ RestorePageTableAbove4G (
PFAddress = AsmReadCr2 ();
// PML4E
PTIndex = BitFieldRead64 (PFAddress, 39, 47);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
// PDPTE
PTIndex = BitFieldRead64 (PFAddress, 30, 38);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
// PD
PTIndex = BitFieldRead64 (PFAddress, 21, 29);
- Address = PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK;
+ Address = PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK;
//
// Check if 2MB-page entry need be changed to 4KB-page entry.
//
@@ -241,9 +243,9 @@ RestorePageTableAbove4G (
AcquirePage (&PageTable[PTIndex]);
// PTE
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
for (Index = 0; Index < 512; Index++) {
- PageTable[Index] = Address | PAGE_ATTRIBUTE_BITS;
+ PageTable[Index] = Address | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
if (!IsAddressValid (Address, &Nx)) {
PageTable[Index] = PageTable[Index] & (INTN)(INT32)(~PAGE_ATTRIBUTE_BITS);
}
--
2.7.4
* Re: [PATCH v4 1/6] MdeModulePkg: Add PCD PcdPteMemoryEncryptionAddressOrMask
2017-02-27 2:20 ` Zeng, Star
@ 2017-02-27 14:12 ` Duran, Leo
2017-02-28 0:59 ` Zeng, Star
0 siblings, 1 reply; 15+ messages in thread
From: Duran, Leo @ 2017-02-27 14:12 UTC (permalink / raw)
To: Zeng, Star, edk2-devel@ml01.01.org
Cc: Tian, Feng, Laszlo Ersek, Singh, Brijesh
Please see below.
> -----Original Message-----
> From: Zeng, Star [mailto:star.zeng@intel.com]
> Sent: Sunday, February 26, 2017 8:20 PM
> To: Duran, Leo <leo.duran@amd.com>; edk2-devel@ml01.01.org
> Cc: Tian, Feng <feng.tian@intel.com>; Laszlo Ersek <lersek@redhat.com>;
> Singh, Brijesh <brijesh.singh@amd.com>; Zeng, Star <star.zeng@intel.com>
> Subject: RE: [PATCH v4 1/6] MdeModulePkg: Add PCD
> PcdPteMemoryEncryptionAddressOrMask
>
> We saw you defined 4K/2M/1G in previous patch series, #define
> PAGING_4K_ADDRESS_MASK_64 0x000FFFFFFFFFF000ull #define
> PAGING_2M_ADDRESS_MASK_64 0x000FFFFFFFE00000ull #define
> PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull But only 1G mask
> is defined and used in this patch series, is that on purpose?
> #define PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull
>
> That means PcdPteMemoryEncryptionAddressOrMask will be just valid as 1G
> aligned, right?
>
> Thanks,
> Star
[Duran, Leo] Correct... The mask *must* allow for 1G pages, so I've simplified the logic.
> -----Original Message-----
> From: Leo Duran [mailto:leo.duran@amd.com]
> Sent: Monday, February 27, 2017 1:43 AM
> To: edk2-devel@ml01.01.org
> Cc: Leo Duran <leo.duran@amd.com>; Tian, Feng <feng.tian@intel.com>;
> Zeng, Star <star.zeng@intel.com>; Laszlo Ersek <lersek@redhat.com>;
> Brijesh Singh <brijesh.singh@amd.com>
> Subject: [PATCH v4 1/6] MdeModulePkg: Add PCD
> PcdPteMemoryEncryptionAddressOrMask
>
> This PCD holds the address mask for page table entries when memory
> encryption is enabled on AMD processors supporting the Secure Encrypted
> Virtualization (SEV) feature.
>
> Cc: Feng Tian <feng.tian@intel.com>
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.0
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Leo Duran <leo.duran@amd.com>
> Reviewed-by: Star Zeng <star.zeng@intel.com>
> ---
> MdeModulePkg/MdeModulePkg.dec | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/MdeModulePkg/MdeModulePkg.dec
> b/MdeModulePkg/MdeModulePkg.dec index 426634f..f45ca84 100644
> --- a/MdeModulePkg/MdeModulePkg.dec
> +++ b/MdeModulePkg/MdeModulePkg.dec
> @@ -6,6 +6,8 @@
> # Copyright (c) 2007 - 2017, Intel Corporation. All rights reserved.<BR>
> # Copyright (c) 2016, Linaro Ltd. All rights reserved.<BR>
> # (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
> +# Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> #
> # This program and the accompanying materials are licensed and made available under
> # the terms and conditions of the BSD License that accompanies this distribution.
> # The full text of the license may be found at
> @@ -1702,6 +1704,12 @@
> # @Prompt A list of system FMP ImageTypeId GUIDs
>
> gEfiMdeModulePkgTokenSpaceGuid.PcdSystemFmpCapsuleImageTypeIdGuid|{0x0}|VOID*|0x30001046
>
> + ## This PCD holds the address mask for page table entries when memory encryption is
> + # enabled on AMD processors supporting the Secure Encrypted Virtualization (SEV) feature.
> + # This mask should be applied when creating 1:1 virtual to physical mapping tables.
> + #
> + gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask|0x0|UINT64|0x30001047
> +
> [PcdsPatchableInModule]
> ## Specify memory size with page number for PEI code when
> # Loading Module at Fixed Address feature is enabled.
> --
> 2.7.4
* Re: [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: Add support for PCD PcdPteMemoryEncryptionAddressOrMask
2017-02-27 7:51 ` Fan, Jeff
@ 2017-02-27 14:15 ` Duran, Leo
0 siblings, 0 replies; 15+ messages in thread
From: Duran, Leo @ 2017-02-27 14:15 UTC (permalink / raw)
To: Fan, Jeff, edk2-devel@ml01.01.org
Cc: Tian, Feng, Zeng, Star, Laszlo Ersek, Singh, Brijesh
Excellent, thanks.
Leo
> -----Original Message-----
> From: Fan, Jeff [mailto:jeff.fan@intel.com]
> Sent: Monday, February 27, 2017 1:51 AM
> To: Duran, Leo <leo.duran@amd.com>; edk2-devel@ml01.01.org
> Cc: Tian, Feng <feng.tian@intel.com>; Zeng, Star <star.zeng@intel.com>;
> Laszlo Ersek <lersek@redhat.com>; Singh, Brijesh <brijesh.singh@amd.com>
> Subject: RE: [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: Add support
> for PCD PcdPteMemoryEncryptionAddressOrMask
>
> Leo,
>
> I just saw your patch removed SetCacheability() also. I will drop my patch in
> https://www.mail-archive.com/edk2-devel@lists.01.org/msg22944.html :-)
>
> Thanks!
> Jeff
>
> -----Original Message-----
> From: Leo Duran [mailto:leo.duran@amd.com]
> Sent: Monday, February 27, 2017 1:43 AM
> To: edk2-devel@ml01.01.org
> Cc: Leo Duran; Fan, Jeff; Tian, Feng; Zeng, Star; Laszlo Ersek; Brijesh Singh
> Subject: [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: Add support for
> PCD PcdPteMemoryEncryptionAddressOrMask
>
> This PCD holds the address mask for page table entries when memory
> encryption is enabled on AMD processors supporting the Secure Encrypted
> Virtualization (SEV) feature.
>
> The mask is applied when page-table entries are created or modified.
>
> CC: Jeff Fan <jeff.fan@intel.com>
> Cc: Feng Tian <feng.tian@intel.com>
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Brijesh Singh <brijesh.singh@amd.com>
> Contributed-under: TianoCore Contribution Agreement 1.0
> Signed-off-by: Leo Duran <leo.duran@amd.com>
> Reviewed-by: Star Zeng <star.zeng@intel.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c | 6 +-
> UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 83 +++------------------
> -
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c | 14 ++++
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 8 ++-
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 2 +
> UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 14 ++--
> UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c | 16 +++--
> UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 41 ++++++-----
> UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c | 32 +++++----
> 9 files changed, 91 insertions(+), 125 deletions(-)
> mode change 100644 => 100755 UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> index c1f4b7e..119810a 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> @@ -2,6 +2,8 @@
> Page table manipulation functions for IA-32 processors
>
> Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> +
> This program and the accompanying materials are licensed and made
> available under the terms and conditions of the BSD License which
> accompanies this distribution. The full text of the license may be found at
> @@ -204,7 +206,7 @@ SetPageTableAttributes (
> PageTableSplitted = (PageTableSplitted || IsSplitted);
>
> for (Index3 = 0; Index3 < 4; Index3++) {
> - L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] &
> PAGING_4K_ADDRESS_MASK_64);
> + L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] &
> + ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> if (L2PageTable == NULL) {
> continue;
> }
> @@ -217,7 +219,7 @@ SetPageTableAttributes (
> // 2M
> continue;
> }
> - L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] &
> PAGING_4K_ADDRESS_MASK_64);
> + L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] &
> + ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> if (L1PageTable == NULL) {
> continue;
> }
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> index c7aa48b..d99ad46 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> @@ -2,6 +2,8 @@
> SMM MP service implementation
>
> Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> +
> This program and the accompanying materials are licensed and made
> available under the terms and conditions of the BSD License which
> accompanies this distribution. The full text of the license may be found at
> @@ -781,7 +783,8 @@ Gen4GPageTable (
> // Set Page Directory Pointers
> //
> for (Index = 0; Index < 4; Index++) {
> - Pte[Index] = (UINTN)PageTable + EFI_PAGE_SIZE * (Index + 1) +
> (Is32BitPageTable ? IA32_PAE_PDPTE_ATTRIBUTE_BITS :
> PAGE_ATTRIBUTE_BITS);
> + Pte[Index] = (UINT64)((UINTN)PageTable + EFI_PAGE_SIZE * (Index + 1))
> | mAddressEncMask |
> + (Is32BitPageTable ? IA32_PAE_PDPTE_ATTRIBUTE_BITS :
> + PAGE_ATTRIBUTE_BITS);
> }
> Pte += EFI_PAGE_SIZE / sizeof (*Pte);
>
> @@ -789,7 +792,7 @@ Gen4GPageTable (
> // Fill in Page Directory Entries
> //
> for (Index = 0; Index < EFI_PAGE_SIZE * 4 / sizeof (*Pte); Index++) {
> - Pte[Index] = (Index << 21) | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
> + Pte[Index] = (Index << 21) | mAddressEncMask | IA32_PG_PS |
> + PAGE_ATTRIBUTE_BITS;
> }
>
> if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
> @@ -797,8 +800,8 @@ Gen4GPageTable (
> GuardPage = mSmmStackArrayBase + EFI_PAGE_SIZE;
> Pdpte = (UINT64*)PageTable;
> for (PageIndex = Low2MBoundary; PageIndex <= High2MBoundary;
> PageIndex += SIZE_2MB) {
> - Pte = (UINT64*)(UINTN)(Pdpte[BitFieldRead32 ((UINT32)PageIndex, 30,
> 31)] & ~(EFI_PAGE_SIZE - 1));
> - Pte[BitFieldRead32 ((UINT32)PageIndex, 21, 29)] = (UINT64)Pages |
> PAGE_ATTRIBUTE_BITS;
> + Pte = (UINT64*)(UINTN)(Pdpte[BitFieldRead32 ((UINT32)PageIndex, 30,
> 31)] & ~mAddressEncMask & ~(EFI_PAGE_SIZE - 1));
> + Pte[BitFieldRead32 ((UINT32)PageIndex, 21, 29)] = (UINT64)Pages |
> + mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> //
> // Fill in Page Table Entries
> //
> @@ -809,13 +812,13 @@ Gen4GPageTable (
> //
> // Mark the guard page as non-present
> //
> - Pte[Index] = PageAddress;
> + Pte[Index] = PageAddress | mAddressEncMask;
> GuardPage += mSmmStackSize;
> if (GuardPage > mSmmStackArrayEnd) {
> GuardPage = 0;
> }
> } else {
> - Pte[Index] = PageAddress | PAGE_ATTRIBUTE_BITS;
> + Pte[Index] = PageAddress | mAddressEncMask |
> + PAGE_ATTRIBUTE_BITS;
> }
> PageAddress+= EFI_PAGE_SIZE;
> }
> @@ -827,74 +830,6 @@ Gen4GPageTable (
> }
>
> /**
> - Set memory cache ability.
> -
> - @param PageTable PageTable Address
> - @param Address Memory Address to change cache ability
> - @param Cacheability Cache ability to set
> -
> -**/
> -VOID
> -SetCacheability (
> - IN UINT64 *PageTable,
> - IN UINTN Address,
> - IN UINT8 Cacheability
> - )
> -{
> - UINTN PTIndex;
> - VOID *NewPageTableAddress;
> - UINT64 *NewPageTable;
> - UINTN Index;
> -
> - ASSERT ((Address & EFI_PAGE_MASK) == 0);
> -
> - if (sizeof (UINTN) == sizeof (UINT64)) {
> - PTIndex = (UINTN)RShiftU64 (Address, 39) & 0x1ff;
> - ASSERT (PageTable[PTIndex] & IA32_PG_P);
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
> - }
> -
> - PTIndex = (UINTN)RShiftU64 (Address, 30) & 0x1ff;
> - ASSERT (PageTable[PTIndex] & IA32_PG_P);
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
> -
> - //
> - // A perfect implementation should check the original cacheability with the
> - // one being set, and break a 2M page entry into pieces only when they
> - // disagreed.
> - //
> - PTIndex = (UINTN)RShiftU64 (Address, 21) & 0x1ff;
> - if ((PageTable[PTIndex] & IA32_PG_PS) != 0) {
> - //
> - // Allocate a page from SMRAM
> - //
> - NewPageTableAddress = AllocatePageTableMemory (1);
> - ASSERT (NewPageTableAddress != NULL);
> -
> - NewPageTable = (UINT64 *)NewPageTableAddress;
> -
> - for (Index = 0; Index < 0x200; Index++) {
> - NewPageTable[Index] = PageTable[PTIndex];
> - if ((NewPageTable[Index] & IA32_PG_PAT_2M) != 0) {
> - NewPageTable[Index] &= ~((UINT64)IA32_PG_PAT_2M);
> - NewPageTable[Index] |= (UINT64)IA32_PG_PAT_4K;
> - }
> - NewPageTable[Index] |= (UINT64)(Index << EFI_PAGE_SHIFT);
> - }
> -
> - PageTable[PTIndex] = ((UINTN)NewPageTableAddress & gPhyMask) |
> PAGE_ATTRIBUTE_BITS;
> - }
> -
> - ASSERT (PageTable[PTIndex] & IA32_PG_P);
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
> -
> - PTIndex = (UINTN)RShiftU64 (Address, 12) & 0x1ff;
> - ASSERT (PageTable[PTIndex] & IA32_PG_P);
> - PageTable[PTIndex] &= ~((UINT64)((IA32_PG_PAT_4K | IA32_PG_CD | IA32_PG_WT)));
> - PageTable[PTIndex] |= (UINT64)Cacheability;
> -}
> -
> -/**
> Schedule a procedure to run on the specified CPU.
>
> @param[in] Procedure The address of the procedure to run
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
> old mode 100644
> new mode 100755
> index fc7714a..d5b8900
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
> @@ -2,6 +2,8 @@
> Agent Module to load other modules to deploy SMM Entry Vector for X86
> CPU.
>
> Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> +
> This program and the accompanying materials are licensed and made
> available under the terms and conditions of the BSD License which
> accompanies this distribution. The full text of the license may be found at
> @@ -97,6 +99,11 @@ BOOLEAN mSmmReadyToLock = FALSE;
> BOOLEAN mSmmCodeAccessCheckEnable = FALSE;
>
> //
> +// Global copy of the PcdPteMemoryEncryptionAddressOrMask
> +//
> +UINT64 mAddressEncMask = 0;
> +
> +//
> // Spin lock used to serialize setting of SMM Code Access Check feature //
> SPIN_LOCK *mConfigSmmCodeAccessCheckLock = NULL;
> @@ -605,6 +612,13 @@ PiCpuSmmEntry (
> DEBUG ((EFI_D_INFO, "PcdCpuSmmCodeAccessCheckEnable = %d\n",
> mSmmCodeAccessCheckEnable));
>
> //
> + // Save the PcdPteMemoryEncryptionAddressOrMask value into a global variable.
> + // Make sure AddressEncMask is contained to smallest supported address field.
> + //
> + mAddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
> + DEBUG ((EFI_D_INFO, "mAddressEncMask = 0x%lx\n", mAddressEncMask));
> +
> // If support CPU hot plug, we need to allocate resources for possibly hot-
> added processors
> //
> if (FeaturePcdGet (PcdCpuHotPlugSupport)) {
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> index 69c54fb..71af2f1 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> @@ -2,6 +2,8 @@
> Agent Module to load other modules to deploy SMM Entry Vector for X86
> CPU.
>
> Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> +
> This program and the accompanying materials are licensed and made
> available under the terms and conditions of the BSD License which
> accompanies this distribution. The full text of the license may be found at
> @@ -184,7 +186,6 @@ extern EFI_SMM_CPU_PROTOCOL mSmmCpu;
> ///
> extern UINT8 mSmmSaveStateRegisterLma;
>
> -
> //
> // SMM CPU Protocol function prototypes.
> //
> @@ -415,6 +416,11 @@ extern SPIN_LOCK *mPFLock;
> extern SPIN_LOCK *mConfigSmmCodeAccessCheckLock;
> extern SPIN_LOCK *mMemoryMappedLock;
>
> +//
> +// Copy of the PcdPteMemoryEncryptionAddressOrMask
> +//
> +extern UINT64 mAddressEncMask;
> +
> /**
> Create 4G PageTable in SMRAM.
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> index d409edf..099792e 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> @@ -5,6 +5,7 @@
> # provides CPU specific services in SMM.
> #
> # Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
> +# Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> #
> # This program and the accompanying materials
> # are licensed and made available under the terms and conditions of the BSD License
> @@ -157,6 +158,7 @@
> gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmSyncMode ##
> CONSUMES
> gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmStaticPageTable ##
> CONSUMES
> gEfiMdeModulePkgTokenSpaceGuid.PcdAcpiS3Enable ##
> CONSUMES
> + gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask ## CONSUMES
>
> [Depex]
> gEfiMpServiceProtocolGuid
> diff --git
> a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> index 13323d5..a535389 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> @@ -119,7 +119,7 @@ GetPageTableEntry (
> return NULL;
> }
>
> - L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] &
> PAGING_4K_ADDRESS_MASK_64);
> + L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] &
> + ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> } else {
> L3PageTable = (UINT64 *)GetPageTableBase ();
> }
> @@ -133,7 +133,7 @@ GetPageTableEntry (
> return &L3PageTable[Index3];
> }
>
> - L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] &
> PAGING_4K_ADDRESS_MASK_64);
> + L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] &
> + ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> if (L2PageTable[Index2] == 0) {
> *PageAttribute = PageNone;
> return NULL;
> @@ -145,7 +145,7 @@ GetPageTableEntry (
> }
>
> // 4k
> - L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] &
> PAGING_4K_ADDRESS_MASK_64);
> + L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] &
> + ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> if ((L1PageTable[Index1] == 0) && (Address != 0)) {
> *PageAttribute = PageNone;
> return NULL;
> @@ -304,9 +304,9 @@ SplitPage (
> }
> BaseAddress = *PageEntry & PAGING_2M_ADDRESS_MASK_64;
> for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
> - NewPageEntry[Index] = BaseAddress + SIZE_4KB * Index +
> ((*PageEntry) & PAGE_PROGATE_BITS);
> + NewPageEntry[Index] = (BaseAddress + SIZE_4KB * Index) |
> + mAddressEncMask | ((*PageEntry) & PAGE_PROGATE_BITS);
> }
> - (*PageEntry) = (UINT64)(UINTN)NewPageEntry +
> PAGE_ATTRIBUTE_BITS;
> + (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask |
> + PAGE_ATTRIBUTE_BITS;
> return RETURN_SUCCESS;
> } else {
> return RETURN_UNSUPPORTED;
> @@ -325,9 +325,9 @@ SplitPage (
> }
> BaseAddress = *PageEntry & PAGING_1G_ADDRESS_MASK_64;
> for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
> - NewPageEntry[Index] = BaseAddress + SIZE_2MB * Index +
> IA32_PG_PS + ((*PageEntry) & PAGE_PROGATE_BITS);
> + NewPageEntry[Index] = (BaseAddress + SIZE_2MB * Index) |
> + mAddressEncMask | IA32_PG_PS | ((*PageEntry) &
> PAGE_PROGATE_BITS);
> }
> - (*PageEntry) = (UINT64)(UINTN)NewPageEntry +
> PAGE_ATTRIBUTE_BITS;
> + (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask |
> + PAGE_ATTRIBUTE_BITS;
> return RETURN_SUCCESS;
> } else {
> return RETURN_UNSUPPORTED;
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> index f53819e..1b84e2c 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> @@ -2,6 +2,8 @@
> Enable SMM profile.
>
> Copyright (c) 2012 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> +
> This program and the accompanying materials are licensed and made
> available under the terms and conditions of the BSD License which
> accompanies this distribution. The full text of the license may be found at
> @@ -513,7 +515,7 @@ InitPaging (
> //
> continue;
> }
> - Pde = (UINT64 *)(UINTN)(Pml4[Level1] & PHYSICAL_ADDRESS_MASK);
> + Pde = (UINT64 *)(UINTN)(Pml4[Level1] & ~mAddressEncMask &
> + PHYSICAL_ADDRESS_MASK);
> } else {
> Pde = (UINT64*)(UINTN)mSmmProfileCr3;
> }
> @@ -530,7 +532,7 @@ InitPaging (
> //
> continue;
> }
> - Pte = (UINT64 *)(UINTN)(*Pde & PHYSICAL_ADDRESS_MASK);
> + Pte = (UINT64 *)(UINTN)(*Pde & ~mAddressEncMask &
> + PHYSICAL_ADDRESS_MASK);
> if (Pte == 0) {
> continue;
> }
> @@ -557,9 +559,9 @@ InitPaging (
>
> // Split it
> for (Level4 = 0; Level4 < SIZE_4KB / sizeof(*Pt); Level4++) {
> - Pt[Level4] = Address + ((Level4 << 12) | PAGE_ATTRIBUTE_BITS);
> + Pt[Level4] = Address + ((Level4 << 12) | mAddressEncMask |
> + PAGE_ATTRIBUTE_BITS);
> } // end for PT
> - *Pte = (UINTN)Pt | PAGE_ATTRIBUTE_BITS;
> + *Pte = (UINT64)(UINTN)Pt | mAddressEncMask |
> + PAGE_ATTRIBUTE_BITS;
> } // end if IsAddressSplit
> } // end for PTE
> } // end for PDE
> @@ -577,7 +579,7 @@ InitPaging (
> //
> continue;
> }
> - Pde = (UINT64 *)(UINTN)(Pml4[Level1] & PHYSICAL_ADDRESS_MASK);
> + Pde = (UINT64 *)(UINTN)(Pml4[Level1] & ~mAddressEncMask &
> + PHYSICAL_ADDRESS_MASK);
> } else {
> Pde = (UINT64*)(UINTN)mSmmProfileCr3;
> }
> @@ -597,7 +599,7 @@ InitPaging (
> }
> continue;
> }
> - Pte = (UINT64 *)(UINTN)(*Pde & PHYSICAL_ADDRESS_MASK);
> + Pte = (UINT64 *)(UINTN)(*Pde & ~mAddressEncMask &
> + PHYSICAL_ADDRESS_MASK);
> if (Pte == 0) {
> continue;
> }
> @@ -624,7 +626,7 @@ InitPaging (
> }
> } else {
> // 4KB page
> - Pt = (UINT64 *)(UINTN)(*Pte & PHYSICAL_ADDRESS_MASK);
> + Pt = (UINT64 *)(UINTN)(*Pte & ~mAddressEncMask &
> + PHYSICAL_ADDRESS_MASK);
> if (Pt == 0) {
> continue;
> }
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> index 17b2f4c..19b19d8 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> @@ -2,6 +2,8 @@
> Page Fault (#PF) handler for X64 processors
>
> Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> +
> This program and the accompanying materials are licensed and made
> available under the terms and conditions of the BSD License which
> accompanies this distribution. The full text of the license may be found at
> @@ -16,6 +18,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY
> KIND, EITHER EXPRESS OR IMPLIED.
>
> #define PAGE_TABLE_PAGES 8
> #define ACC_MAX_BIT BIT3
> +
> LIST_ENTRY mPagePool = INITIALIZE_LIST_HEAD_VARIABLE
> (mPagePool);
> BOOLEAN m1GPageTableSupport = FALSE;
> UINT8 mPhysicalAddressBits;
> @@ -168,13 +171,13 @@ SetStaticPageTable (
> //
> // Each PML4 entry points to a page of Page Directory Pointer entries.
> //
> - PageDirectoryPointerEntry = (UINT64 *) ((*PageMapLevel4Entry) &
> gPhyMask);
> + PageDirectoryPointerEntry = (UINT64 *) ((*PageMapLevel4Entry) &
> + ~mAddressEncMask & gPhyMask);
> if (PageDirectoryPointerEntry == NULL) {
> PageDirectoryPointerEntry = AllocatePageTableMemory (1);
> ASSERT(PageDirectoryPointerEntry != NULL);
> ZeroMem (PageDirectoryPointerEntry, EFI_PAGES_TO_SIZE(1));
>
> - *PageMapLevel4Entry = ((UINTN)PageDirectoryPointerEntry &
> gPhyMask) | PAGE_ATTRIBUTE_BITS;
> + *PageMapLevel4Entry = (UINT64)(UINTN)PageDirectoryPointerEntry |
> + mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> }
>
> if (m1GPageTableSupport) {
> @@ -189,7 +192,7 @@ SetStaticPageTable (
> //
> // Fill in the Page Directory entries
> //
> - *PageDirectory1GEntry = (PageAddress & gPhyMask) | IA32_PG_PS |
> PAGE_ATTRIBUTE_BITS;
> + *PageDirectory1GEntry = PageAddress | mAddressEncMask |
> + IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
> }
> } else {
> PageAddress = BASE_4GB;
> @@ -204,7 +207,7 @@ SetStaticPageTable (
> // Each Directory Pointer entries points to a page of Page Directory
> entires.
> // So allocate space for them and fill them in in the
> IndexOfPageDirectoryEntries loop.
> //
> - PageDirectoryEntry = (UINT64 *) ((*PageDirectoryPointerEntry) &
> gPhyMask);
> + PageDirectoryEntry = (UINT64 *) ((*PageDirectoryPointerEntry) &
> + ~mAddressEncMask & gPhyMask);
> if (PageDirectoryEntry == NULL) {
> PageDirectoryEntry = AllocatePageTableMemory (1);
> ASSERT(PageDirectoryEntry != NULL);
> @@ -213,14 +216,14 @@ SetStaticPageTable (
> //
> // Fill in a Page Directory Pointer Entries
> //
> - *PageDirectoryPointerEntry = (UINT64)(UINTN)PageDirectoryEntry |
> PAGE_ATTRIBUTE_BITS;
> + *PageDirectoryPointerEntry =
> + (UINT64)(UINTN)PageDirectoryEntry | mAddressEncMask |
> + PAGE_ATTRIBUTE_BITS;
> }
>
> for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries <
> 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress +=
> SIZE_2MB) {
> //
> // Fill in the Page Directory entries
> //
> - *PageDirectoryEntry = (UINT64)PageAddress | IA32_PG_PS |
> PAGE_ATTRIBUTE_BITS;
> + *PageDirectoryEntry = PageAddress | mAddressEncMask |
> + IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
> }
> }
> }
> @@ -276,7 +279,7 @@ SmmInitPageTable (
> //
> PTEntry = (UINT64*)AllocatePageTableMemory (1);
> ASSERT (PTEntry != NULL);
> - *PTEntry = Pages | PAGE_ATTRIBUTE_BITS;
> + *PTEntry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> ZeroMem (PTEntry + 1, EFI_PAGE_SIZE - sizeof (*PTEntry));
>
> //
> @@ -457,7 +460,7 @@ ReclaimPages (
> //
> continue;
> }
> - Pdpt = (UINT64*)(UINTN)(Pml4[Pml4Index] & gPhyMask);
> + Pdpt = (UINT64*)(UINTN)(Pml4[Pml4Index] & ~mAddressEncMask &
> + gPhyMask);
> PML4EIgnore = FALSE;
> for (PdptIndex = 0; PdptIndex < EFI_PAGE_SIZE / sizeof (*Pdpt);
> PdptIndex++) {
> if ((Pdpt[PdptIndex] & IA32_PG_P) == 0 || (Pdpt[PdptIndex] & IA32_PG_PMNT) != 0) {
> @@ -478,7 +481,7 @@ ReclaimPages (
> // we will not check PML4 entry more
> //
> PML4EIgnore = TRUE;
> - Pdt = (UINT64*)(UINTN)(Pdpt[PdptIndex] & gPhyMask);
> + Pdt = (UINT64*)(UINTN)(Pdpt[PdptIndex] & ~mAddressEncMask &
> + gPhyMask);
> PDPTEIgnore = FALSE;
> for (PdtIndex = 0; PdtIndex < EFI_PAGE_SIZE / sizeof(*Pdt);
> PdtIndex++) {
> if ((Pdt[PdtIndex] & IA32_PG_P) == 0 || (Pdt[PdtIndex] & IA32_PG_PMNT) != 0) {
> @@ -560,7 +563,7 @@ ReclaimPages (
> //
> // Secondly, insert the page pointed by this entry into page pool and clear
> this entry
> //
> - InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(*ReleasePageAddress
> & gPhyMask));
> + InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(*ReleasePageAddress
> + & ~mAddressEncMask & gPhyMask));
> *ReleasePageAddress = 0;
>
> //
> @@ -572,14 +575,14 @@ ReclaimPages (
> //
> // If 4 KByte Page Table is released, check the PDPT entry
> //
> - Pdpt = (UINT64*)(UINTN)(Pml4[MinPml4] & gPhyMask);
> + Pdpt = (UINT64*)(UINTN)(Pml4[MinPml4] & ~mAddressEncMask &
> + gPhyMask);
> SubEntriesNum = GetSubEntriesNum(Pdpt + MinPdpt);
> if (SubEntriesNum == 0) {
> //
> // Release the empty Page Directory table if there was no more 4 KByte
> Page Table entry
> // clear the Page directory entry
> //
> - InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pdpt[MinPdpt] &
> gPhyMask));
> + InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pdpt[MinPdpt]
> + & ~mAddressEncMask & gPhyMask));
> Pdpt[MinPdpt] = 0;
> //
> // Go on checking the PML4 table
> @@ -603,7 +606,7 @@ ReclaimPages (
> // Release the empty PML4 table if there was no more 1G KByte Page
> Table entry
> // clear the Page directory entry
> //
> - InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pml4[MinPml4] &
> gPhyMask));
> + InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pml4[MinPml4]
> + & ~mAddressEncMask & gPhyMask));
> Pml4[MinPml4] = 0;
> MinPdpt = (UINTN)-1;
> continue;
> @@ -747,7 +750,7 @@ SmiDefaultPFHandler (
> //
> // If the entry is not present, allocate one page from page pool for it
> //
> - PageTable[PTIndex] = AllocPage () | PAGE_ATTRIBUTE_BITS;
> + PageTable[PTIndex] = AllocPage () | mAddressEncMask |
> + PAGE_ATTRIBUTE_BITS;
> } else {
> //
> // Save the upper entry address
> @@ -760,7 +763,7 @@ SmiDefaultPFHandler (
> //
> PageTable[PTIndex] |= (UINT64)IA32_PG_A;
> SetAccNum (PageTable + PTIndex, 7);
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
> + PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] &
> + ~mAddressEncMask & gPhyMask);
> }
>
> PTIndex = BitFieldRead64 (PFAddress, StartBit, StartBit + 8);
> @@ -776,7 +779,7 @@ SmiDefaultPFHandler (
> //
> // Fill the new entry
> //
> - PageTable[PTIndex] = (PFAddress & gPhyMask & ~((1ull << EndBit) - 1)) |
> + PageTable[PTIndex] = ((PFAddress | mAddressEncMask) & gPhyMask &
> + ~((1ull << EndBit) - 1)) |
> PageAttribute | IA32_PG_A | PAGE_ATTRIBUTE_BITS;
> if (UpperEntry != NULL) {
> SetSubEntriesNum (UpperEntry, GetSubEntriesNum (UpperEntry) + 1);
> @@ -927,7 +930,7 @@ SetPageTableAttributes (
> PageTableSplitted = (PageTableSplitted || IsSplitted);
>
> for (Index4 = 0; Index4 < SIZE_4KB/sizeof(UINT64); Index4++) {
> - L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] &
> PAGING_4K_ADDRESS_MASK_64);
> + L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] &
> + ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> if (L3PageTable == NULL) {
> continue;
> }
> @@ -940,7 +943,7 @@ SetPageTableAttributes (
> // 1G
> continue;
> }
> - L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] &
> PAGING_4K_ADDRESS_MASK_64);
> + L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] &
> + ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> if (L2PageTable == NULL) {
> continue;
> }
> @@ -953,7 +956,7 @@ SetPageTableAttributes (
> // 2M
> continue;
> }
> - L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & PAGING_4K_ADDRESS_MASK_64);
> + L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> if (L1PageTable == NULL) {
> continue;
> }
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
> index cc393dc..37da5fb 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
> @@ -2,6 +2,8 @@
> X64 processor specific functions to enable SMM profile.
>
> Copyright (c) 2012 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> +
> This program and the accompanying materials are licensed and made
> available under the terms and conditions of the BSD License which
> accompanies this distribution. The full text of the license may be found at
> @@ -52,7 +54,7 @@ InitSmmS3Cr3 (
> //
> PTEntry = (UINT64*)AllocatePageTableMemory (1);
> ASSERT (PTEntry != NULL);
> - *PTEntry = Pages | PAGE_ATTRIBUTE_BITS;
> + *PTEntry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> ZeroMem (PTEntry + 1, EFI_PAGE_SIZE - sizeof (*PTEntry));
>
> //
> @@ -111,14 +113,14 @@ AcquirePage (
> //
> // Cut the previous uplink if it exists and wasn't overwritten
> //
> - if ((mPFPageUplink[mPFPageIndex] != NULL) && ((*mPFPageUplink[mPFPageIndex] & PHYSICAL_ADDRESS_MASK) == Address)) {
> + if ((mPFPageUplink[mPFPageIndex] != NULL) && ((*mPFPageUplink[mPFPageIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK) == Address)) {
> *mPFPageUplink[mPFPageIndex] = 0;
> }
>
> //
> // Link & Record the current uplink
> //
> - *Uplink = Address | PAGE_ATTRIBUTE_BITS;
> + *Uplink = Address | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> mPFPageUplink[mPFPageIndex] = Uplink;
>
> mPFPageIndex = (mPFPageIndex + 1) % MAX_PF_PAGE_COUNT;
> @@ -168,33 +170,33 @@ RestorePageTableAbove4G (
> PTIndex = BitFieldRead64 (PFAddress, 39, 47);
> if ((PageTable[PTIndex] & IA32_PG_P) != 0) {
> // PML4E
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
> + PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
> PTIndex = BitFieldRead64 (PFAddress, 30, 38);
> if ((PageTable[PTIndex] & IA32_PG_P) != 0) {
> // PDPTE
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
> + PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
> PTIndex = BitFieldRead64 (PFAddress, 21, 29);
> // PD
> if ((PageTable[PTIndex] & IA32_PG_PS) != 0) {
> //
> // 2MB page
> //
> - Address = (UINT64)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
> - if ((Address & PHYSICAL_ADDRESS_MASK & ~((1ull << 21) - 1)) == ((PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 21) - 1)))) {
> + Address = (UINT64)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
> + if ((Address & ~((1ull << 21) - 1)) == ((PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 21) - 1)))) {
> Existed = TRUE;
> }
> } else {
> //
> // 4KB page
> //
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
> + PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
> if (PageTable != 0) {
> //
> // When there is a valid entry to map to 4KB page, need not create a
> new entry to map 2MB.
> //
> PTIndex = BitFieldRead64 (PFAddress, 12, 20);
> - Address = (UINT64)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
> - if ((Address & PHYSICAL_ADDRESS_MASK & ~((1ull << 12) - 1)) == (PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 12) - 1))) {
> + Address = (UINT64)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
> + if ((Address & ~((1ull << 12) - 1)) == (PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 12) - 1))) {
> Existed = TRUE;
> }
> }
> @@ -227,13 +229,13 @@ RestorePageTableAbove4G (
> PFAddress = AsmReadCr2 ();
> // PML4E
> PTIndex = BitFieldRead64 (PFAddress, 39, 47);
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
> + PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
> // PDPTE
> PTIndex = BitFieldRead64 (PFAddress, 30, 38);
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
> + PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
> // PD
> PTIndex = BitFieldRead64 (PFAddress, 21, 29);
> - Address = PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK;
> + Address = PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK;
> //
> // Check if 2MB-page entry need be changed to 4KB-page entry.
> //
> @@ -241,9 +243,9 @@ RestorePageTableAbove4G (
> AcquirePage (&PageTable[PTIndex]);
>
> // PTE
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
> + PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
> for (Index = 0; Index < 512; Index++) {
> - PageTable[Index] = Address | PAGE_ATTRIBUTE_BITS;
> + PageTable[Index] = Address | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> if (!IsAddressValid (Address, &Nx)) {
> PageTable[Index] = PageTable[Index] & (INTN)(INT32)(~PAGE_ATTRIBUTE_BITS);
> }
> --
> 2.7.4
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v4 1/6] MdeModulePkg: Add PCD PcdPteMemoryEncryptionAddressOrMask
2017-02-27 14:12 ` Duran, Leo
@ 2017-02-28 0:59 ` Zeng, Star
0 siblings, 0 replies; 15+ messages in thread
From: Zeng, Star @ 2017-02-28 0:59 UTC (permalink / raw)
To: Duran, Leo, edk2-devel@ml01.01.org
Cc: Singh, Brijesh, Tian, Feng, Laszlo Ersek, Zeng, Star
Reviewed-by: Star Zeng <star.zeng@intel.com> to MdeModulePkg changes.
Thanks,
Star
-----Original Message-----
From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org] On Behalf Of Duran, Leo
Sent: Monday, February 27, 2017 10:13 PM
To: Zeng, Star <star.zeng@intel.com>; edk2-devel@ml01.01.org
Cc: Singh, Brijesh <brijesh.singh@amd.com>; Tian, Feng <feng.tian@intel.com>; Laszlo Ersek <lersek@redhat.com>
Subject: Re: [edk2] [PATCH v4 1/6] MdeModulePkg: Add PCD PcdPteMemoryEncryptionAddressOrMask
Please see below.
> -----Original Message-----
> From: Zeng, Star [mailto:star.zeng@intel.com]
> Sent: Sunday, February 26, 2017 8:20 PM
> To: Duran, Leo <leo.duran@amd.com>; edk2-devel@ml01.01.org
> Cc: Tian, Feng <feng.tian@intel.com>; Laszlo Ersek
> <lersek@redhat.com>; Singh, Brijesh <brijesh.singh@amd.com>; Zeng,
> Star <star.zeng@intel.com>
> Subject: RE: [PATCH v4 1/6] MdeModulePkg: Add PCD
> PcdPteMemoryEncryptionAddressOrMask
>
> We saw you defined 4K/2M/1G in a previous patch series:
> #define PAGING_4K_ADDRESS_MASK_64 0x000FFFFFFFFFF000ull
> #define PAGING_2M_ADDRESS_MASK_64 0x000FFFFFFFE00000ull
> #define PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull
> But only the 1G mask is defined and used in this patch series, is that on purpose?
> #define PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull
>
> That means PcdPteMemoryEncryptionAddressOrMask will only be valid as a
> 1G-aligned mask, right?
>
> Thanks,
> Star
[Duran, Leo] Correct... The mask *must* allow for 1G pages, so I've simplified the logic.
> -----Original Message-----
> From: Leo Duran [mailto:leo.duran@amd.com]
> Sent: Monday, February 27, 2017 1:43 AM
> To: edk2-devel@ml01.01.org
> Cc: Leo Duran <leo.duran@amd.com>; Tian, Feng <feng.tian@intel.com>;
> Zeng, Star <star.zeng@intel.com>; Laszlo Ersek <lersek@redhat.com>;
> Brijesh Singh <brijesh.singh@amd.com>
> Subject: [PATCH v4 1/6] MdeModulePkg: Add PCD
> PcdPteMemoryEncryptionAddressOrMask
>
> This PCD holds the address mask for page table entries when memory
> encryption is enabled on AMD processors supporting the Secure
> Encrypted Virtualization (SEV) feature.
>
> Cc: Feng Tian <feng.tian@intel.com>
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Contributed-under: TianoCore Contribution Agreement 1.0
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Leo Duran <leo.duran@amd.com>
> Reviewed-by: Star Zeng <star.zeng@intel.com>
> ---
> MdeModulePkg/MdeModulePkg.dec | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/MdeModulePkg/MdeModulePkg.dec
> b/MdeModulePkg/MdeModulePkg.dec index 426634f..f45ca84 100644
> --- a/MdeModulePkg/MdeModulePkg.dec
> +++ b/MdeModulePkg/MdeModulePkg.dec
> @@ -6,6 +6,8 @@
> # Copyright (c) 2007 - 2017, Intel Corporation. All rights reserved.<BR>
> # Copyright (c) 2016, Linaro Ltd. All rights reserved.<BR>
> # (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
> +# Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> #
> # This program and the accompanying materials are licensed and made available under
> # the terms and conditions of the BSD License that accompanies this distribution.
> # The full text of the license may be found at
> @@ -1702,6 +1704,12 @@
> # @Prompt A list of system FMP ImageTypeId GUIDs
>
> gEfiMdeModulePkgTokenSpaceGuid.PcdSystemFmpCapsuleImageTypeIdGuid|{0x0}|VOID*|0x30001046
>
> + ## This PCD holds the address mask for page table entries when memory encryption is
> + #  enabled on AMD processors supporting the Secure Encrypted Virtualization (SEV) feature.
> + #  This mask should be applied when creating 1:1 virtual to physical mapping tables.
> + #
> + gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask|0x0|UINT64|0x30001047
> +
> [PcdsPatchableInModule]
> ## Specify memory size with page number for PEI code when
> # Loading Module at Fixed Address feature is enabled.
> --
> 2.7.4
_______________________________________________
edk2-devel mailing list
edk2-devel@lists.01.org
https://lists.01.org/mailman/listinfo/edk2-devel
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v4 4/6] UefiCpuPkg/Universal/Acpi/S3Resume2Pei: Add support for PCD PcdPteMemoryEncryptionAddressOrMask
2017-02-26 17:43 ` [PATCH v4 4/6] UefiCpuPkg/Universal/Acpi/S3Resume2Pei: " Leo Duran
@ 2017-02-28 8:12 ` Fan, Jeff
0 siblings, 0 replies; 15+ messages in thread
From: Fan, Jeff @ 2017-02-28 8:12 UTC (permalink / raw)
To: Leo Duran, edk2-devel@ml01.01.org
Cc: Tian, Feng, Zeng, Star, Laszlo Ersek, Brijesh Singh
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
-----Original Message-----
From: Leo Duran [mailto:leo.duran@amd.com]
Sent: Monday, February 27, 2017 1:43 AM
To: edk2-devel@ml01.01.org
Cc: Leo Duran; Fan, Jeff; Tian, Feng; Zeng, Star; Laszlo Ersek; Brijesh Singh
Subject: [PATCH v4 4/6] UefiCpuPkg/Universal/Acpi/S3Resume2Pei: Add support for PCD PcdPteMemoryEncryptionAddressOrMask
This PCD holds the address mask for page table entries when memory encryption is enabled on AMD processors supporting the Secure Encrypted Virtualization (SEV) feature.
The mask is applied when page tables are created (S3Resume.c).
CC: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leo Duran <leo.duran@amd.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
---
UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume.c | 17 +++++++++++++----
UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume2Pei.inf | 2 ++
2 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume.c b/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume.c
index d306fba..a9d1042 100644
--- a/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume.c
+++ b/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume.c
@@ -5,6 +5,7 @@
control is passed to OS waking up handler.
Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
+ Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions
@@ -58,6 +59,8 @@
#define STACK_ALIGN_DOWN(Ptr) \
((UINTN)(Ptr) & ~(UINTN)(CPU_STACK_ALIGNMENT - 1))
+#define PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull
+
#pragma pack(1)
typedef union {
struct {
@@ -614,6 +617,12 @@ RestoreS3PageTables (
VOID *Hob;
BOOLEAN Page1GSupport;
PAGE_TABLE_1G_ENTRY *PageDirectory1GEntry;
+ UINT64 AddressEncMask;
+
+ //
+ // Make sure AddressEncMask is contained to smallest supported address field
+ //
+ AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
//
// NOTE: We have to ASSUME the page table generation format, because we do not know whole page table information.
@@ -696,7 +705,7 @@ RestoreS3PageTables (
//
// Make a PML4 Entry
//
- PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry;
+ PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask;
PageMapLevel4Entry->Bits.ReadWrite = 1;
PageMapLevel4Entry->Bits.Present = 1;
@@ -707,7 +716,7 @@ RestoreS3PageTables (
//
// Fill in the Page Directory entries
//
- PageDirectory1GEntry->Uint64 = (UINT64)PageAddress;
+ PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
PageDirectory1GEntry->Bits.ReadWrite = 1;
PageDirectory1GEntry->Bits.Present = 1;
PageDirectory1GEntry->Bits.MustBe1 = 1;
@@ -724,7 +733,7 @@ RestoreS3PageTables (
//
// Fill in a Page Directory Pointer Entries
//
- PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry;
+ PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask;
PageDirectoryPointerEntry->Bits.ReadWrite = 1;
PageDirectoryPointerEntry->Bits.Present = 1;
@@ -732,7 +741,7 @@ RestoreS3PageTables (
//
// Fill in the Page Directory entries
//
- PageDirectoryEntry->Uint64 = (UINT64)PageAddress;
+ PageDirectoryEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
PageDirectoryEntry->Bits.ReadWrite = 1;
PageDirectoryEntry->Bits.Present = 1;
PageDirectoryEntry->Bits.MustBe1 = 1;
diff --git a/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume2Pei.inf b/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume2Pei.inf
index 73aeca3..d514523 100644
--- a/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume2Pei.inf
+++ b/UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume2Pei.inf
@@ -6,6 +6,7 @@
# control is passed to OS waking up handler.
#
# Copyright (c) 2010 - 2014, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
#
# This program and the accompanying materials are
# licensed and made available under the terms and conditions of the BSD License
@@ -91,6 +92,7 @@
[Pcd]
gEfiMdeModulePkgTokenSpaceGuid.PcdUse1GPageTable ## SOMETIMES_CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask ## CONSUMES
[Depex]
TRUE
--
2.7.4
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: Add support for PCD PcdPteMemoryEncryptionAddressOrMask
2017-02-26 17:43 ` [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: " Leo Duran
2017-02-27 7:51 ` Fan, Jeff
@ 2017-02-28 8:12 ` Fan, Jeff
1 sibling, 0 replies; 15+ messages in thread
From: Fan, Jeff @ 2017-02-28 8:12 UTC (permalink / raw)
To: Leo Duran, edk2-devel@ml01.01.org
Cc: Tian, Feng, Zeng, Star, Laszlo Ersek, Brijesh Singh
Reviewed-by: Jeff Fan <jeff.fan@intel.com>
-----Original Message-----
From: Leo Duran [mailto:leo.duran@amd.com]
Sent: Monday, February 27, 2017 1:43 AM
To: edk2-devel@ml01.01.org
Cc: Leo Duran; Fan, Jeff; Tian, Feng; Zeng, Star; Laszlo Ersek; Brijesh Singh
Subject: [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: Add support for PCD PcdPteMemoryEncryptionAddressOrMask
This PCD holds the address mask for page table entries when memory encryption is enabled on AMD processors supporting the Secure Encrypted Virtualization (SEV) feature.
The mask is applied when page table entries are created or modified.
CC: Jeff Fan <jeff.fan@intel.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Leo Duran <leo.duran@amd.com>
Reviewed-by: Star Zeng <star.zeng@intel.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c | 6 +-
UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 83 +++-------------------
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c | 14 ++++
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 8 ++-
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 2 +
UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 14 ++--
UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c | 16 +++--
UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 41 ++++++-----
UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c | 32 +++++----
9 files changed, 91 insertions(+), 125 deletions(-)
mode change 100644 => 100755 UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
index c1f4b7e..119810a 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
@@ -2,6 +2,8 @@
Page table manipulation functions for IA-32 processors
Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials are licensed and made available under the terms and conditions of the BSD License which accompanies this distribution. The full text of the license may be found at
@@ -204,7 +206,7 @@ SetPageTableAttributes (
PageTableSplitted = (PageTableSplitted || IsSplitted);
for (Index3 = 0; Index3 < 4; Index3++) {
- L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & PAGING_4K_ADDRESS_MASK_64);
+ L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L2PageTable == NULL) {
continue;
}
@@ -217,7 +219,7 @@ SetPageTableAttributes (
// 2M
continue;
}
- L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & PAGING_4K_ADDRESS_MASK_64);
+ L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L1PageTable == NULL) {
continue;
}
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
index c7aa48b..d99ad46 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
@@ -2,6 +2,8 @@
SMM MP service implementation
Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials are licensed and made available under the terms and conditions of the BSD License which accompanies this distribution. The full text of the license may be found at
@@ -781,7 +783,8 @@ Gen4GPageTable (
// Set Page Directory Pointers
//
for (Index = 0; Index < 4; Index++) {
- Pte[Index] = (UINTN)PageTable + EFI_PAGE_SIZE * (Index + 1) + (Is32BitPageTable ? IA32_PAE_PDPTE_ATTRIBUTE_BITS : PAGE_ATTRIBUTE_BITS);
+ Pte[Index] = (UINT64)((UINTN)PageTable + EFI_PAGE_SIZE * (Index + 1)) | mAddressEncMask | (Is32BitPageTable ? IA32_PAE_PDPTE_ATTRIBUTE_BITS : PAGE_ATTRIBUTE_BITS);
}
Pte += EFI_PAGE_SIZE / sizeof (*Pte);
@@ -789,7 +792,7 @@ Gen4GPageTable (
// Fill in Page Directory Entries
//
for (Index = 0; Index < EFI_PAGE_SIZE * 4 / sizeof (*Pte); Index++) {
- Pte[Index] = (Index << 21) | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
+ Pte[Index] = (Index << 21) | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
}
if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
@@ -797,8 +800,8 @@ Gen4GPageTable (
GuardPage = mSmmStackArrayBase + EFI_PAGE_SIZE;
Pdpte = (UINT64*)PageTable;
for (PageIndex = Low2MBoundary; PageIndex <= High2MBoundary; PageIndex += SIZE_2MB) {
- Pte = (UINT64*)(UINTN)(Pdpte[BitFieldRead32 ((UINT32)PageIndex, 30, 31)] & ~(EFI_PAGE_SIZE - 1));
- Pte[BitFieldRead32 ((UINT32)PageIndex, 21, 29)] = (UINT64)Pages | PAGE_ATTRIBUTE_BITS;
+ Pte = (UINT64*)(UINTN)(Pdpte[BitFieldRead32 ((UINT32)PageIndex, 30, 31)] & ~mAddressEncMask & ~(EFI_PAGE_SIZE - 1));
+ Pte[BitFieldRead32 ((UINT32)PageIndex, 21, 29)] = (UINT64)Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
//
// Fill in Page Table Entries
//
@@ -809,13 +812,13 @@ Gen4GPageTable (
//
// Mark the guard page as non-present
//
- Pte[Index] = PageAddress;
+ Pte[Index] = PageAddress | mAddressEncMask;
GuardPage += mSmmStackSize;
if (GuardPage > mSmmStackArrayEnd) {
GuardPage = 0;
}
} else {
- Pte[Index] = PageAddress | PAGE_ATTRIBUTE_BITS;
+ Pte[Index] = PageAddress | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
}
PageAddress+= EFI_PAGE_SIZE;
}
@@ -827,74 +830,6 @@ Gen4GPageTable (
}
/**
- Set memory cache ability.
-
- @param PageTable PageTable Address
- @param Address Memory Address to change cache ability
- @param Cacheability Cache ability to set
-
-**/
-VOID
-SetCacheability (
- IN UINT64 *PageTable,
- IN UINTN Address,
- IN UINT8 Cacheability
- )
-{
- UINTN PTIndex;
- VOID *NewPageTableAddress;
- UINT64 *NewPageTable;
- UINTN Index;
-
- ASSERT ((Address & EFI_PAGE_MASK) == 0);
-
- if (sizeof (UINTN) == sizeof (UINT64)) {
- PTIndex = (UINTN)RShiftU64 (Address, 39) & 0x1ff;
- ASSERT (PageTable[PTIndex] & IA32_PG_P);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
- }
-
- PTIndex = (UINTN)RShiftU64 (Address, 30) & 0x1ff;
- ASSERT (PageTable[PTIndex] & IA32_PG_P);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
-
- //
- // A perfect implementation should check the original cacheability with the
- // one being set, and break a 2M page entry into pieces only when they
- // disagreed.
- //
- PTIndex = (UINTN)RShiftU64 (Address, 21) & 0x1ff;
- if ((PageTable[PTIndex] & IA32_PG_PS) != 0) {
- //
- // Allocate a page from SMRAM
- //
- NewPageTableAddress = AllocatePageTableMemory (1);
- ASSERT (NewPageTableAddress != NULL);
-
- NewPageTable = (UINT64 *)NewPageTableAddress;
-
- for (Index = 0; Index < 0x200; Index++) {
- NewPageTable[Index] = PageTable[PTIndex];
- if ((NewPageTable[Index] & IA32_PG_PAT_2M) != 0) {
- NewPageTable[Index] &= ~((UINT64)IA32_PG_PAT_2M);
- NewPageTable[Index] |= (UINT64)IA32_PG_PAT_4K;
- }
- NewPageTable[Index] |= (UINT64)(Index << EFI_PAGE_SHIFT);
- }
-
- PageTable[PTIndex] = ((UINTN)NewPageTableAddress & gPhyMask) | PAGE_ATTRIBUTE_BITS;
- }
-
- ASSERT (PageTable[PTIndex] & IA32_PG_P);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
-
- PTIndex = (UINTN)RShiftU64 (Address, 12) & 0x1ff;
- ASSERT (PageTable[PTIndex] & IA32_PG_P);
- PageTable[PTIndex] &= ~((UINT64)((IA32_PG_PAT_4K | IA32_PG_CD | IA32_PG_WT)));
- PageTable[PTIndex] |= (UINT64)Cacheability;
-}
-
-/**
Schedule a procedure to run on the specified CPU.
@param[in] Procedure The address of the procedure to run
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
old mode 100644
new mode 100755
index fc7714a..d5b8900
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
@@ -2,6 +2,8 @@
Agent Module to load other modules to deploy SMM Entry Vector for X86 CPU.
Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials are licensed and made available under the terms and conditions of the BSD License which accompanies this distribution. The full text of the license may be found at
@@ -97,6 +99,11 @@ BOOLEAN mSmmReadyToLock = FALSE;
BOOLEAN mSmmCodeAccessCheckEnable = FALSE;
//
+// Global copy of the PcdPteMemoryEncryptionAddressOrMask
+//
+UINT64 mAddressEncMask = 0;
+
+//
// Spin lock used to serialize setting of SMM Code Access Check feature
//
SPIN_LOCK *mConfigSmmCodeAccessCheckLock = NULL;
@@ -605,6 +612,13 @@ PiCpuSmmEntry (
DEBUG ((EFI_D_INFO, "PcdCpuSmmCodeAccessCheckEnable = %d\n", mSmmCodeAccessCheckEnable));
//
+ // Save the PcdPteMemoryEncryptionAddressOrMask value into a global variable.
+ // Make sure AddressEncMask is contained to smallest supported address field.
+ //
+ mAddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
+ DEBUG ((EFI_D_INFO, "mAddressEncMask = 0x%lx\n", mAddressEncMask));
+
+ //
// If support CPU hot plug, we need to allocate resources for possibly hot-added processors
//
if (FeaturePcdGet (PcdCpuHotPlugSupport)) {
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
index 69c54fb..71af2f1 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
@@ -2,6 +2,8 @@
Agent Module to load other modules to deploy SMM Entry Vector for X86 CPU.
Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials are licensed and made available under the terms and conditions of the BSD License which accompanies this distribution. The full text of the license may be found at
@@ -184,7 +186,6 @@ extern EFI_SMM_CPU_PROTOCOL mSmmCpu;
///
extern UINT8 mSmmSaveStateRegisterLma;
-
//
// SMM CPU Protocol function prototypes.
//
@@ -415,6 +416,11 @@ extern SPIN_LOCK *mPFLock;
extern SPIN_LOCK *mConfigSmmCodeAccessCheckLock;
extern SPIN_LOCK *mMemoryMappedLock;
+//
+// Copy of the PcdPteMemoryEncryptionAddressOrMask
+//
+extern UINT64 mAddressEncMask;
+
/**
Create 4G PageTable in SMRAM.
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
index d409edf..099792e 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
@@ -5,6 +5,7 @@
# provides CPU specific services in SMM.
#
# Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
#
# This program and the accompanying materials
# are licensed and made available under the terms and conditions of the BSD License
@@ -157,6 +158,7 @@
gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmSyncMode ## CONSUMES
gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmStaticPageTable ## CONSUMES
gEfiMdeModulePkgTokenSpaceGuid.PcdAcpiS3Enable ## CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask ## CONSUMES
[Depex]
gEfiMpServiceProtocolGuid
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
index 13323d5..a535389 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
@@ -119,7 +119,7 @@ GetPageTableEntry (
return NULL;
}
- L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & PAGING_4K_ADDRESS_MASK_64);
+ L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
} else {
L3PageTable = (UINT64 *)GetPageTableBase ();
}
@@ -133,7 +133,7 @@ GetPageTableEntry (
return &L3PageTable[Index3];
}
- L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & PAGING_4K_ADDRESS_MASK_64);
+ L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L2PageTable[Index2] == 0) {
*PageAttribute = PageNone;
return NULL;
@@ -145,7 +145,7 @@ GetPageTableEntry (
}
// 4k
- L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & PAGING_4K_ADDRESS_MASK_64);
+ L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if ((L1PageTable[Index1] == 0) && (Address != 0)) {
*PageAttribute = PageNone;
return NULL;
@@ -304,9 +304,9 @@ SplitPage (
}
BaseAddress = *PageEntry & PAGING_2M_ADDRESS_MASK_64;
for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
- NewPageEntry[Index] = BaseAddress + SIZE_4KB * Index + ((*PageEntry) & PAGE_PROGATE_BITS);
+ NewPageEntry[Index] = (BaseAddress + SIZE_4KB * Index) | mAddressEncMask | ((*PageEntry) & PAGE_PROGATE_BITS);
}
- (*PageEntry) = (UINT64)(UINTN)NewPageEntry + PAGE_ATTRIBUTE_BITS;
+ (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
return RETURN_SUCCESS;
} else {
return RETURN_UNSUPPORTED;
@@ -325,9 +325,9 @@ SplitPage (
}
BaseAddress = *PageEntry & PAGING_1G_ADDRESS_MASK_64;
for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
- NewPageEntry[Index] = BaseAddress + SIZE_2MB * Index + IA32_PG_PS + ((*PageEntry) & PAGE_PROGATE_BITS);
+ NewPageEntry[Index] = (BaseAddress + SIZE_2MB * Index) | mAddressEncMask | IA32_PG_PS | ((*PageEntry) & PAGE_PROGATE_BITS);
}
- (*PageEntry) = (UINT64)(UINTN)NewPageEntry + PAGE_ATTRIBUTE_BITS;
+ (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
return RETURN_SUCCESS;
} else {
return RETURN_UNSUPPORTED;
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
index f53819e..1b84e2c 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
@@ -2,6 +2,8 @@
Enable SMM profile.
Copyright (c) 2012 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials are licensed and made available under the terms and conditions of the BSD License which accompanies this distribution. The full text of the license may be found at
@@ -513,7 +515,7 @@ InitPaging (
//
continue;
}
- Pde = (UINT64 *)(UINTN)(Pml4[Level1] & PHYSICAL_ADDRESS_MASK);
+ Pde = (UINT64 *)(UINTN)(Pml4[Level1] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
} else {
Pde = (UINT64*)(UINTN)mSmmProfileCr3;
}
@@ -530,7 +532,7 @@ InitPaging (
//
continue;
}
- Pte = (UINT64 *)(UINTN)(*Pde & PHYSICAL_ADDRESS_MASK);
+ Pte = (UINT64 *)(UINTN)(*Pde & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
if (Pte == 0) {
continue;
}
@@ -557,9 +559,9 @@ InitPaging (
// Split it
for (Level4 = 0; Level4 < SIZE_4KB / sizeof(*Pt); Level4++) {
- Pt[Level4] = Address + ((Level4 << 12) | PAGE_ATTRIBUTE_BITS);
+ Pt[Level4] = Address + ((Level4 << 12) | mAddressEncMask | PAGE_ATTRIBUTE_BITS);
} // end for PT
- *Pte = (UINTN)Pt | PAGE_ATTRIBUTE_BITS;
+ *Pte = (UINT64)(UINTN)Pt | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
} // end if IsAddressSplit
} // end for PTE
} // end for PDE
@@ -577,7 +579,7 @@ InitPaging (
//
continue;
}
- Pde = (UINT64 *)(UINTN)(Pml4[Level1] & PHYSICAL_ADDRESS_MASK);
+ Pde = (UINT64 *)(UINTN)(Pml4[Level1] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
} else {
Pde = (UINT64*)(UINTN)mSmmProfileCr3;
}
@@ -597,7 +599,7 @@ InitPaging (
}
continue;
}
- Pte = (UINT64 *)(UINTN)(*Pde & PHYSICAL_ADDRESS_MASK);
+ Pte = (UINT64 *)(UINTN)(*Pde & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
if (Pte == 0) {
continue;
}
@@ -624,7 +626,7 @@ InitPaging (
}
} else {
// 4KB page
- Pt = (UINT64 *)(UINTN)(*Pte & PHYSICAL_ADDRESS_MASK);
+ Pt = (UINT64 *)(UINTN)(*Pte & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
if (Pt == 0) {
continue;
}
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
index 17b2f4c..19b19d8 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
@@ -2,6 +2,8 @@
Page Fault (#PF) handler for X64 processors
Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials are licensed and made available under the terms and conditions of the BSD License which accompanies this distribution. The full text of the license may be found at
@@ -16,6 +18,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
#define PAGE_TABLE_PAGES 8
#define ACC_MAX_BIT BIT3
+
LIST_ENTRY mPagePool = INITIALIZE_LIST_HEAD_VARIABLE (mPagePool);
BOOLEAN m1GPageTableSupport = FALSE;
UINT8 mPhysicalAddressBits;
@@ -168,13 +171,13 @@ SetStaticPageTable (
//
// Each PML4 entry points to a page of Page Directory Pointer entries.
//
- PageDirectoryPointerEntry = (UINT64 *) ((*PageMapLevel4Entry) & gPhyMask);
+ PageDirectoryPointerEntry = (UINT64 *) ((*PageMapLevel4Entry) & ~mAddressEncMask & gPhyMask);
if (PageDirectoryPointerEntry == NULL) {
PageDirectoryPointerEntry = AllocatePageTableMemory (1);
ASSERT(PageDirectoryPointerEntry != NULL);
ZeroMem (PageDirectoryPointerEntry, EFI_PAGES_TO_SIZE(1));
- *PageMapLevel4Entry = ((UINTN)PageDirectoryPointerEntry & gPhyMask) | PAGE_ATTRIBUTE_BITS;
+ *PageMapLevel4Entry = (UINT64)(UINTN)PageDirectoryPointerEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
}
if (m1GPageTableSupport) {
@@ -189,7 +192,7 @@ SetStaticPageTable (
//
// Fill in the Page Directory entries
//
- *PageDirectory1GEntry = (PageAddress & gPhyMask) | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
+ *PageDirectory1GEntry = PageAddress | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
}
} else {
PageAddress = BASE_4GB;
@@ -204,7 +207,7 @@ SetStaticPageTable (
// Each Directory Pointer entries points to a page of Page Directory entires.
// So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop.
//
- PageDirectoryEntry = (UINT64 *) ((*PageDirectoryPointerEntry) & gPhyMask);
+ PageDirectoryEntry = (UINT64 *) ((*PageDirectoryPointerEntry) & ~mAddressEncMask & gPhyMask);
if (PageDirectoryEntry == NULL) {
PageDirectoryEntry = AllocatePageTableMemory (1);
ASSERT(PageDirectoryEntry != NULL);
@@ -213,14 +216,14 @@ SetStaticPageTable (
//
// Fill in a Page Directory Pointer Entries
//
- *PageDirectoryPointerEntry = (UINT64)(UINTN)PageDirectoryEntry | PAGE_ATTRIBUTE_BITS;
+ *PageDirectoryPointerEntry = (UINT64)(UINTN)PageDirectoryEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
}
for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) {
//
// Fill in the Page Directory entries
//
- *PageDirectoryEntry = (UINT64)PageAddress | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
+ *PageDirectoryEntry = PageAddress | mAddressEncMask | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
}
}
}
@@ -276,7 +279,7 @@ SmmInitPageTable (
//
PTEntry = (UINT64*)AllocatePageTableMemory (1);
ASSERT (PTEntry != NULL);
- *PTEntry = Pages | PAGE_ATTRIBUTE_BITS;
+ *PTEntry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
ZeroMem (PTEntry + 1, EFI_PAGE_SIZE - sizeof (*PTEntry));
//
@@ -457,7 +460,7 @@ ReclaimPages (
//
continue;
}
- Pdpt = (UINT64*)(UINTN)(Pml4[Pml4Index] & gPhyMask);
+ Pdpt = (UINT64*)(UINTN)(Pml4[Pml4Index] & ~mAddressEncMask & gPhyMask);
PML4EIgnore = FALSE;
for (PdptIndex = 0; PdptIndex < EFI_PAGE_SIZE / sizeof (*Pdpt); PdptIndex++) {
if ((Pdpt[PdptIndex] & IA32_PG_P) == 0 || (Pdpt[PdptIndex] & IA32_PG_PMNT) != 0) {
@@ -478,7 +481,7 @@ ReclaimPages (
// we will not check PML4 entry more
//
PML4EIgnore = TRUE;
- Pdt = (UINT64*)(UINTN)(Pdpt[PdptIndex] & gPhyMask);
+ Pdt = (UINT64*)(UINTN)(Pdpt[PdptIndex] & ~mAddressEncMask & gPhyMask);
PDPTEIgnore = FALSE;
for (PdtIndex = 0; PdtIndex < EFI_PAGE_SIZE / sizeof(*Pdt); PdtIndex++) {
if ((Pdt[PdtIndex] & IA32_PG_P) == 0 || (Pdt[PdtIndex] & IA32_PG_PMNT) != 0) {
@@ -560,7 +563,7 @@ ReclaimPages (
//
// Secondly, insert the page pointed by this entry into page pool and clear this entry
//
- InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(*ReleasePageAddress & gPhyMask));
+ InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(*ReleasePageAddress & ~mAddressEncMask & gPhyMask));
*ReleasePageAddress = 0;
//
@@ -572,14 +575,14 @@ ReclaimPages (
//
// If 4 KByte Page Table is released, check the PDPT entry
//
- Pdpt = (UINT64*)(UINTN)(Pml4[MinPml4] & gPhyMask);
+ Pdpt = (UINT64*)(UINTN)(Pml4[MinPml4] & ~mAddressEncMask & gPhyMask);
SubEntriesNum = GetSubEntriesNum(Pdpt + MinPdpt);
if (SubEntriesNum == 0) {
//
// Release the empty Page Directory table if there was no more 4 KByte Page Table entry
// clear the Page directory entry
//
- InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pdpt[MinPdpt] & gPhyMask));
+ InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pdpt[MinPdpt] & ~mAddressEncMask & gPhyMask));
Pdpt[MinPdpt] = 0;
//
// Go on checking the PML4 table
@@ -603,7 +606,7 @@ ReclaimPages (
// Release the empty PML4 table if there was no more 1G KByte Page Table entry
// clear the Page directory entry
//
- InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pml4[MinPml4] & gPhyMask));
+ InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pml4[MinPml4] & ~mAddressEncMask & gPhyMask));
Pml4[MinPml4] = 0;
MinPdpt = (UINTN)-1;
continue;
@@ -747,7 +750,7 @@ SmiDefaultPFHandler (
//
// If the entry is not present, allocate one page from page pool for it
//
- PageTable[PTIndex] = AllocPage () | PAGE_ATTRIBUTE_BITS;
+ PageTable[PTIndex] = AllocPage () | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
} else {
//
// Save the upper entry address
@@ -760,7 +763,7 @@ SmiDefaultPFHandler (
//
PageTable[PTIndex] |= (UINT64)IA32_PG_A;
SetAccNum (PageTable + PTIndex, 7);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & gPhyMask);
}
PTIndex = BitFieldRead64 (PFAddress, StartBit, StartBit + 8);
@@ -776,7 +779,7 @@ SmiDefaultPFHandler (
//
// Fill the new entry
//
- PageTable[PTIndex] = (PFAddress & gPhyMask & ~((1ull << EndBit) - 1)) |
+ PageTable[PTIndex] = ((PFAddress | mAddressEncMask) & gPhyMask & ~((1ull << EndBit) - 1)) |
PageAttribute | IA32_PG_A | PAGE_ATTRIBUTE_BITS;
if (UpperEntry != NULL) {
SetSubEntriesNum (UpperEntry, GetSubEntriesNum (UpperEntry) + 1);
@@ -927,7 +930,7 @@ SetPageTableAttributes (
PageTableSplitted = (PageTableSplitted || IsSplitted);
for (Index4 = 0; Index4 < SIZE_4KB/sizeof(UINT64); Index4++) {
- L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & PAGING_4K_ADDRESS_MASK_64);
+ L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L3PageTable == NULL) {
continue;
}
@@ -940,7 +943,7 @@ SetPageTableAttributes (
// 1G
continue;
}
- L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & PAGING_4K_ADDRESS_MASK_64);
+ L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L2PageTable == NULL) {
continue;
}
@@ -953,7 +956,7 @@ SetPageTableAttributes (
// 2M
continue;
}
- L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & PAGING_4K_ADDRESS_MASK_64);
+ L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
if (L1PageTable == NULL) {
continue;
}
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
index cc393dc..37da5fb 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
@@ -2,6 +2,8 @@
X64 processor specific functions to enable SMM profile.
Copyright (c) 2012 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
+
This program and the accompanying materials are licensed and made available under the terms and conditions of the BSD License which accompanies this distribution. The full text of the license may be found at
@@ -52,7 +54,7 @@ InitSmmS3Cr3 (
//
PTEntry = (UINT64*)AllocatePageTableMemory (1);
ASSERT (PTEntry != NULL);
- *PTEntry = Pages | PAGE_ATTRIBUTE_BITS;
+ *PTEntry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
ZeroMem (PTEntry + 1, EFI_PAGE_SIZE - sizeof (*PTEntry));
//
@@ -111,14 +113,14 @@ AcquirePage (
//
// Cut the previous uplink if it exists and wasn't overwritten
//
- if ((mPFPageUplink[mPFPageIndex] != NULL) && ((*mPFPageUplink[mPFPageIndex] & PHYSICAL_ADDRESS_MASK) == Address)) {
+ if ((mPFPageUplink[mPFPageIndex] != NULL) && ((*mPFPageUplink[mPFPageIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK) == Address)) {
*mPFPageUplink[mPFPageIndex] = 0;
}
//
// Link & Record the current uplink
//
- *Uplink = Address | PAGE_ATTRIBUTE_BITS;
+ *Uplink = Address | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
mPFPageUplink[mPFPageIndex] = Uplink;
mPFPageIndex = (mPFPageIndex + 1) % MAX_PF_PAGE_COUNT;
@@ -168,33 +170,33 @@ RestorePageTableAbove4G (
PTIndex = BitFieldRead64 (PFAddress, 39, 47);
if ((PageTable[PTIndex] & IA32_PG_P) != 0) {
// PML4E
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
PTIndex = BitFieldRead64 (PFAddress, 30, 38);
if ((PageTable[PTIndex] & IA32_PG_P) != 0) {
// PDPTE
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
PTIndex = BitFieldRead64 (PFAddress, 21, 29);
// PD
if ((PageTable[PTIndex] & IA32_PG_PS) != 0) {
//
// 2MB page
//
- Address = (UINT64)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
- if ((Address & PHYSICAL_ADDRESS_MASK & ~((1ull << 21) - 1)) == ((PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 21) - 1)))) {
+ Address = (UINT64)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
+ if ((Address & ~((1ull << 21) - 1)) == ((PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 21) - 1)))) {
Existed = TRUE;
}
} else {
//
// 4KB page
//
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
if (PageTable != 0) {
//
// When there is a valid entry to map to 4KB page, need not create a new entry to map 2MB.
//
PTIndex = BitFieldRead64 (PFAddress, 12, 20);
- Address = (UINT64)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
- if ((Address & PHYSICAL_ADDRESS_MASK & ~((1ull << 12) - 1)) == (PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 12) - 1))) {
+ Address = (UINT64)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
+ if ((Address & ~((1ull << 12) - 1)) == (PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 12) - 1))) {
Existed = TRUE;
}
}
@@ -227,13 +229,13 @@ RestorePageTableAbove4G (
PFAddress = AsmReadCr2 ();
// PML4E
PTIndex = BitFieldRead64 (PFAddress, 39, 47);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
// PDPTE
PTIndex = BitFieldRead64 (PFAddress, 30, 38);
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
// PD
PTIndex = BitFieldRead64 (PFAddress, 21, 29);
- Address = PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK;
+ Address = PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK;
//
// Check if 2MB-page entry need be changed to 4KB-page entry.
//
@@ -241,9 +243,9 @@ RestorePageTableAbove4G (
AcquirePage (&PageTable[PTIndex]);
// PTE
- PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
+ PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
for (Index = 0; Index < 512; Index++) {
- PageTable[Index] = Address | PAGE_ATTRIBUTE_BITS;
+ PageTable[Index] = Address | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
if (!IsAddressValid (Address, &Nx)) {
PageTable[Index] = PageTable[Index] & (INTN)(INT32)(~PAGE_ATTRIBUTE_BITS);
}
--
2.7.4
* Re: [PATCH v4 0/6] Add PCD PcdPteMemoryEncryptionAddressOrMask
2017-02-26 17:43 [PATCH v4 0/6] Add PCD PcdPteMemoryEncryptionAddressOrMask Leo Duran
` (5 preceding siblings ...)
2017-02-26 17:43 ` [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: " Leo Duran
@ 2017-03-01 4:56 ` Zeng, Star
6 siblings, 0 replies; 15+ messages in thread
From: Zeng, Star @ 2017-03-01 4:56 UTC (permalink / raw)
To: Leo Duran, edk2-devel@ml01.01.org; +Cc: Fan, Jeff, Zeng, Star
The patch series has been pushed at https://github.com/tianocore/edk2/compare/a9b4ee43d319...241f914975d5.
Thanks,
Star
-----Original Message-----
From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org] On Behalf Of Leo Duran
Sent: Monday, February 27, 2017 1:43 AM
To: edk2-devel@ml01.01.org
Cc: Leo Duran <leo.duran@amd.com>
Subject: [edk2] [PATCH v4 0/6] Add PCD PcdPteMemoryEncryptionAddressOrMask
This new PCD holds the address mask for page table entries when memory encryption is enabled on AMD processors supporting the Secure Encrypted Virtualization (SEV) feature.
This mask is applied when creating or modifying page-table entries.
For example, the OvmfPkg would set the PCD when launching SEV-enabled guests.
Changes since v3:
- Break out changes to MdeModulePkg/Core/DxeIplPeim to a separate patch
- Add a few cases of applying the mask that were previously missed
- Add PCD support for UefiCpuPkg/PiSmmCpuDxeSmm
Leo Duran (6):
MdeModulePkg: Add PCD PcdPteMemoryEncryptionAddressOrMask
MdeModulePkg/Core/DxeIplPeim: Add support for PCD
PcdPteMemoryEncryptionAddressOrMask
MdeModulePkg/Universal/CapsulePei: Add support for PCD
PcdPteMemoryEncryptionAddressOrMask
UefiCpuPkg/Universal/Acpi/S3Resume2Pei: Add support for PCD
PcdPteMemoryEncryptionAddressOrMask
MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe: Add support for
PCD PcdPteMemoryEncryptionAddressOrMask
UefiCpuPkg/PiSmmCpuDxeSmm: Add support for PCD
PcdPteMemoryEncryptionAddressOrMask
MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf | 5 +-
MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c | 12 +++-
MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c | 39 +++++++---
MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h | 5 ++
MdeModulePkg/MdeModulePkg.dec | 8 +++
.../BootScriptExecutorDxe.inf | 2 +
.../Acpi/BootScriptExecutorDxe/ScriptExecute.c | 7 ++
.../Acpi/BootScriptExecutorDxe/ScriptExecute.h | 5 ++
.../Acpi/BootScriptExecutorDxe/X64/SetIdtEntry.c | 15 ++--
MdeModulePkg/Universal/CapsulePei/CapsulePei.inf | 2 +
.../Universal/CapsulePei/Common/CommonHeader.h | 5 ++
MdeModulePkg/Universal/CapsulePei/UefiCapsule.c | 17 +++--
MdeModulePkg/Universal/CapsulePei/X64/X64Entry.c | 24 +++++--
UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c | 6 +-
UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 83 +++-------------------
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c | 14 ++++
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 8 ++-
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 2 +
UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 14 ++--
UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c | 16 +++--
UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 41 ++++++-----
UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c | 32 +++++----
UefiCpuPkg/Universal/Acpi/S3Resume2Pei/S3Resume.c | 17 +++--
.../Universal/Acpi/S3Resume2Pei/S3Resume2Pei.inf | 2 +
24 files changed, 224 insertions(+), 157 deletions(-)
mode change 100644 => 100755 UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
--
2.7.4
_______________________________________________
edk2-devel mailing list
edk2-devel@lists.01.org
https://lists.01.org/mailman/listinfo/edk2-devel
end of thread, other threads: [~2017-03-01 4:56 UTC | newest]
Thread overview: 15+ messages
2017-02-26 17:43 [PATCH v4 0/6] Add PCD PcdPteMemoryEncryptionAddressOrMask Leo Duran
2017-02-26 17:43 ` [PATCH v4 1/6] MdeModulePkg: " Leo Duran
2017-02-27 2:20 ` Zeng, Star
2017-02-27 14:12 ` Duran, Leo
2017-02-28 0:59 ` Zeng, Star
2017-02-26 17:43 ` [PATCH v4 2/6] MdeModulePkg/Core/DxeIplPeim: Add support for " Leo Duran
2017-02-26 17:43 ` [PATCH v4 3/6] MdeModulePkg/Universal/CapsulePei: " Leo Duran
2017-02-26 17:43 ` [PATCH v4 4/6] UefiCpuPkg/Universal/Acpi/S3Resume2Pei: " Leo Duran
2017-02-28 8:12 ` Fan, Jeff
2017-02-26 17:43 ` [PATCH v4 5/6] MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe: " Leo Duran
2017-02-26 17:43 ` [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: " Leo Duran
2017-02-27 7:51 ` Fan, Jeff
2017-02-27 14:15 ` Duran, Leo
2017-02-28 8:12 ` Fan, Jeff
2017-03-01 4:56 ` [PATCH v4 0/6] Add " Zeng, Star