From: "Duran, Leo" <leo.duran@amd.com>
To: "Fan, Jeff" <jeff.fan@intel.com>,
"edk2-devel@ml01.01.org" <edk2-devel@ml01.01.org>
Cc: "Tian, Feng" <feng.tian@intel.com>,
"Zeng, Star" <star.zeng@intel.com>,
Laszlo Ersek <lersek@redhat.com>,
"Singh, Brijesh" <brijesh.singh@amd.com>
Subject: Re: [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: Add support for PCD PcdPteMemoryEncryptionAddressOrMask
Date: Mon, 27 Feb 2017 14:15:22 +0000
Message-ID: <DM5PR12MB124362173FEFD046101B67E1F9570@DM5PR12MB1243.namprd12.prod.outlook.com>
In-Reply-To: <542CF652F8836A4AB8DBFAAD40ED192A4C54A176@shsmsx102.ccr.corp.intel.com>
Excellent, thanks.
Leo
> -----Original Message-----
> From: Fan, Jeff [mailto:jeff.fan@intel.com]
> Sent: Monday, February 27, 2017 1:51 AM
> To: Duran, Leo <leo.duran@amd.com>; edk2-devel@ml01.01.org
> Cc: Tian, Feng <feng.tian@intel.com>; Zeng, Star <star.zeng@intel.com>;
> Laszlo Ersek <lersek@redhat.com>; Singh, Brijesh <brijesh.singh@amd.com>
> Subject: RE: [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: Add support
> for PCD PcdPteMemoryEncryptionAddressOrMask
>
> Leo,
>
> I just saw your patch removed SetCacheability() also. I will drop my patch in
> https://www.mail-archive.com/edk2-devel@lists.01.org/msg22944.html :-)
>
> Thanks!
> Jeff
>
> -----Original Message-----
> From: Leo Duran [mailto:leo.duran@amd.com]
> Sent: Monday, February 27, 2017 1:43 AM
> To: edk2-devel@ml01.01.org
> Cc: Leo Duran; Fan, Jeff; Tian, Feng; Zeng, Star; Laszlo Ersek; Brijesh Singh
> Subject: [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: Add support for
> PCD PcdPteMemoryEncryptionAddressOrMask
>
> This PCD holds the address mask for page table entries when memory
> encryption is enabled on AMD processors supporting the Secure Encrypted
> Virtualization (SEV) feature.
>
> The mask is applied when page table entries are created or modified.
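
A minimal illustrative sketch (not part of the quoted patch) of the pattern the
diff below applies. mAddressEncMask, PAGE_ATTRIBUTE_BITS and
PAGING_4K_ADDRESS_MASK_64 are the names used in the patch; PageTable, Index,
PhysicalAddress and NextLevelTable are hypothetical placeholders:

  UINT64  mAddressEncMask;   // cached PcdPteMemoryEncryptionAddressOrMask, zero when SEV is off

  // When an entry is created or modified, OR the encryption mask in together
  // with the usual attribute bits.
  PageTable[Index] = PhysicalAddress | mAddressEncMask | PAGE_ATTRIBUTE_BITS;

  // When walking the tables, AND the mask back out before using the entry as
  // a pointer to the next-level table.
  NextLevelTable = (UINT64 *)(UINTN)(PageTable[Index] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);

Since mAddressEncMask defaults to zero, both expressions reduce to the existing
code on platforms that do not set the PCD.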
>
> CC: Jeff Fan <jeff.fan@intel.com>
> Cc: Feng Tian <feng.tian@intel.com>
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Brijesh Singh <brijesh.singh@amd.com>
> Contributed-under: TianoCore Contribution Agreement 1.0
> Signed-off-by: Leo Duran <leo.duran@amd.com>
> Reviewed-by: Star Zeng <star.zeng@intel.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c | 6 +-
> UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 83 +++-------------------
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c | 14 ++++
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 8 ++-
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 2 +
> UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c | 14 ++--
> UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c | 16 +++--
> UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 41 ++++++-----
> UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c | 32 +++++----
> 9 files changed, 91 insertions(+), 125 deletions(-)
> mode change 100644 => 100755 UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> index c1f4b7e..119810a 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/Ia32/PageTbl.c
> @@ -2,6 +2,8 @@
> Page table manipulation functions for IA-32 processors
>
> Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> +
> This program and the accompanying materials are licensed and made
> available under the terms and conditions of the BSD License which
> accompanies this distribution. The full text of the license may be found at
> @@ -204,7 +206,7 @@ SetPageTableAttributes (
> PageTableSplitted = (PageTableSplitted || IsSplitted);
>
> for (Index3 = 0; Index3 < 4; Index3++) {
> - L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] &
> PAGING_4K_ADDRESS_MASK_64);
> + L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] &
> + ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> if (L2PageTable == NULL) {
> continue;
> }
> @@ -217,7 +219,7 @@ SetPageTableAttributes (
> // 2M
> continue;
> }
> - L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] &
> PAGING_4K_ADDRESS_MASK_64);
> + L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] &
> + ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> if (L1PageTable == NULL) {
> continue;
> }
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> index c7aa48b..d99ad46 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> @@ -2,6 +2,8 @@
> SMM MP service implementation
>
> Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> +
> This program and the accompanying materials are licensed and made
> available under the terms and conditions of the BSD License which
> accompanies this distribution. The full text of the license may be found at
> @@ -781,7 +783,8 @@ Gen4GPageTable (
> // Set Page Directory Pointers
> //
> for (Index = 0; Index < 4; Index++) {
> - Pte[Index] = (UINTN)PageTable + EFI_PAGE_SIZE * (Index + 1) +
> (Is32BitPageTable ? IA32_PAE_PDPTE_ATTRIBUTE_BITS :
> PAGE_ATTRIBUTE_BITS);
> + Pte[Index] = (UINT64)((UINTN)PageTable + EFI_PAGE_SIZE * (Index + 1))
> | mAddressEncMask |
> + (Is32BitPageTable ? IA32_PAE_PDPTE_ATTRIBUTE_BITS :
> + PAGE_ATTRIBUTE_BITS);
> }
> Pte += EFI_PAGE_SIZE / sizeof (*Pte);
>
> @@ -789,7 +792,7 @@ Gen4GPageTable (
> // Fill in Page Directory Entries
> //
> for (Index = 0; Index < EFI_PAGE_SIZE * 4 / sizeof (*Pte); Index++) {
> - Pte[Index] = (Index << 21) | IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
> + Pte[Index] = (Index << 21) | mAddressEncMask | IA32_PG_PS |
> + PAGE_ATTRIBUTE_BITS;
> }
>
> if (FeaturePcdGet (PcdCpuSmmStackGuard)) {
> @@ -797,8 +800,8 @@ Gen4GPageTable (
> GuardPage = mSmmStackArrayBase + EFI_PAGE_SIZE;
> Pdpte = (UINT64*)PageTable;
> for (PageIndex = Low2MBoundary; PageIndex <= High2MBoundary;
> PageIndex += SIZE_2MB) {
> - Pte = (UINT64*)(UINTN)(Pdpte[BitFieldRead32 ((UINT32)PageIndex, 30,
> 31)] & ~(EFI_PAGE_SIZE - 1));
> - Pte[BitFieldRead32 ((UINT32)PageIndex, 21, 29)] = (UINT64)Pages |
> PAGE_ATTRIBUTE_BITS;
> + Pte = (UINT64*)(UINTN)(Pdpte[BitFieldRead32 ((UINT32)PageIndex, 30,
> 31)] & ~mAddressEncMask & ~(EFI_PAGE_SIZE - 1));
> + Pte[BitFieldRead32 ((UINT32)PageIndex, 21, 29)] = (UINT64)Pages |
> + mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> //
> // Fill in Page Table Entries
> //
> @@ -809,13 +812,13 @@ Gen4GPageTable (
> //
> // Mark the guard page as non-present
> //
> - Pte[Index] = PageAddress;
> + Pte[Index] = PageAddress | mAddressEncMask;
> GuardPage += mSmmStackSize;
> if (GuardPage > mSmmStackArrayEnd) {
> GuardPage = 0;
> }
> } else {
> - Pte[Index] = PageAddress | PAGE_ATTRIBUTE_BITS;
> + Pte[Index] = PageAddress | mAddressEncMask |
> + PAGE_ATTRIBUTE_BITS;
> }
> PageAddress+= EFI_PAGE_SIZE;
> }
> @@ -827,74 +830,6 @@ Gen4GPageTable (
> }
>
> /**
> - Set memory cache ability.
> -
> - @param PageTable PageTable Address
> - @param Address Memory Address to change cache ability
> - @param Cacheability Cache ability to set
> -
> -**/
> -VOID
> -SetCacheability (
> - IN UINT64 *PageTable,
> - IN UINTN Address,
> - IN UINT8 Cacheability
> - )
> -{
> - UINTN PTIndex;
> - VOID *NewPageTableAddress;
> - UINT64 *NewPageTable;
> - UINTN Index;
> -
> - ASSERT ((Address & EFI_PAGE_MASK) == 0);
> -
> - if (sizeof (UINTN) == sizeof (UINT64)) {
> - PTIndex = (UINTN)RShiftU64 (Address, 39) & 0x1ff;
> - ASSERT (PageTable[PTIndex] & IA32_PG_P);
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
> - }
> -
> - PTIndex = (UINTN)RShiftU64 (Address, 30) & 0x1ff;
> - ASSERT (PageTable[PTIndex] & IA32_PG_P);
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
> -
> - //
> - // A perfect implementation should check the original cacheability with the
> - // one being set, and break a 2M page entry into pieces only when they
> - // disagreed.
> - //
> - PTIndex = (UINTN)RShiftU64 (Address, 21) & 0x1ff;
> - if ((PageTable[PTIndex] & IA32_PG_PS) != 0) {
> - //
> - // Allocate a page from SMRAM
> - //
> - NewPageTableAddress = AllocatePageTableMemory (1);
> - ASSERT (NewPageTableAddress != NULL);
> -
> - NewPageTable = (UINT64 *)NewPageTableAddress;
> -
> - for (Index = 0; Index < 0x200; Index++) {
> - NewPageTable[Index] = PageTable[PTIndex];
> - if ((NewPageTable[Index] & IA32_PG_PAT_2M) != 0) {
> - NewPageTable[Index] &= ~((UINT64)IA32_PG_PAT_2M);
> - NewPageTable[Index] |= (UINT64)IA32_PG_PAT_4K;
> - }
> - NewPageTable[Index] |= (UINT64)(Index << EFI_PAGE_SHIFT);
> - }
> -
> - PageTable[PTIndex] = ((UINTN)NewPageTableAddress & gPhyMask) |
> PAGE_ATTRIBUTE_BITS;
> - }
> -
> - ASSERT (PageTable[PTIndex] & IA32_PG_P);
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
> -
> - PTIndex = (UINTN)RShiftU64 (Address, 12) & 0x1ff;
> - ASSERT (PageTable[PTIndex] & IA32_PG_P);
> - PageTable[PTIndex] &= ~((UINT64)((IA32_PG_PAT_4K | IA32_PG_CD |
> IA32_PG_WT)));
> - PageTable[PTIndex] |= (UINT64)Cacheability;
> -}
> -
> -/**
> Schedule a procedure to run on the specified CPU.
>
> @param[in] Procedure The address of the procedure to run
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
> old mode 100644
> new mode 100755
> index fc7714a..d5b8900
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.c
> @@ -2,6 +2,8 @@
> Agent Module to load other modules to deploy SMM Entry Vector for X86
> CPU.
>
> Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> +
> This program and the accompanying materials are licensed and made
> available under the terms and conditions of the BSD License which
> accompanies this distribution. The full text of the license may be found at
> @@ -97,6 +99,11 @@ BOOLEAN mSmmReadyToLock = FALSE;
> BOOLEAN mSmmCodeAccessCheckEnable = FALSE;
>
> //
> +// Global copy of the PcdPteMemoryEncryptionAddressOrMask
> +//
> +UINT64 mAddressEncMask = 0;
> +
> +//
> // Spin lock used to serialize setting of SMM Code Access Check feature
> //
> SPIN_LOCK *mConfigSmmCodeAccessCheckLock = NULL;
> @@ -605,6 +612,13 @@ PiCpuSmmEntry (
> DEBUG ((EFI_D_INFO, "PcdCpuSmmCodeAccessCheckEnable = %d\n",
> mSmmCodeAccessCheckEnable));
>
> //
> + // Save the PcdPteMemoryEncryptionAddressOrMask value into a global variable.
> + // Make sure AddressEncMask is contained to smallest supported address field.
> + //
> + mAddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
> + DEBUG ((EFI_D_INFO, "mAddressEncMask = 0x%lx\n", mAddressEncMask));
> +
> + //
> // If support CPU hot plug, we need to allocate resources for possibly hot-added processors
> //
> if (FeaturePcdGet (PcdCpuHotPlugSupport)) {
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> index 69c54fb..71af2f1 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> @@ -2,6 +2,8 @@
> Agent Module to load other modules to deploy SMM Entry Vector for X86
> CPU.
>
> Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> +
> This program and the accompanying materials are licensed and made
> available under the terms and conditions of the BSD License which
> accompanies this distribution. The full text of the license may be found at
> @@ -184,7 +186,6 @@ extern EFI_SMM_CPU_PROTOCOL mSmmCpu;
> ///
> extern UINT8 mSmmSaveStateRegisterLma;
>
> -
> //
> // SMM CPU Protocol function prototypes.
> //
> @@ -415,6 +416,11 @@ extern SPIN_LOCK *mPFLock;
> extern SPIN_LOCK *mConfigSmmCodeAccessCheckLock;
> extern SPIN_LOCK *mMemoryMappedLock;
>
> +//
> +// Copy of the PcdPteMemoryEncryptionAddressOrMask
> +//
> +extern UINT64 mAddressEncMask;
> +
> /**
> Create 4G PageTable in SMRAM.
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> index d409edf..099792e 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> @@ -5,6 +5,7 @@
> # provides CPU specific services in SMM.
> #
> # Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
> +# Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> #
> # This program and the accompanying materials
> # are licensed and made available under the terms and conditions of the BSD License
> @@ -157,6 +158,7 @@
> gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmSyncMode ##
> CONSUMES
> gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmStaticPageTable ##
> CONSUMES
> gEfiMdeModulePkgTokenSpaceGuid.PcdAcpiS3Enable ##
> CONSUMES
> + gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask ## CONSUMES
>
> [Depex]
> gEfiMpServiceProtocolGuid
> diff --git
> a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> index 13323d5..a535389 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmCpuMemoryManagement.c
> @@ -119,7 +119,7 @@ GetPageTableEntry (
> return NULL;
> }
>
> - L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] &
> PAGING_4K_ADDRESS_MASK_64);
> + L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] &
> + ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> } else {
> L3PageTable = (UINT64 *)GetPageTableBase ();
> }
> @@ -133,7 +133,7 @@ GetPageTableEntry (
> return &L3PageTable[Index3];
> }
>
> - L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] &
> PAGING_4K_ADDRESS_MASK_64);
> + L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] &
> + ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> if (L2PageTable[Index2] == 0) {
> *PageAttribute = PageNone;
> return NULL;
> @@ -145,7 +145,7 @@ GetPageTableEntry (
> }
>
> // 4k
> - L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] &
> PAGING_4K_ADDRESS_MASK_64);
> + L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] &
> + ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> if ((L1PageTable[Index1] == 0) && (Address != 0)) {
> *PageAttribute = PageNone;
> return NULL;
> @@ -304,9 +304,9 @@ SplitPage (
> }
> BaseAddress = *PageEntry & PAGING_2M_ADDRESS_MASK_64;
> for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
> - NewPageEntry[Index] = BaseAddress + SIZE_4KB * Index +
> ((*PageEntry) & PAGE_PROGATE_BITS);
> + NewPageEntry[Index] = (BaseAddress + SIZE_4KB * Index) |
> + mAddressEncMask | ((*PageEntry) & PAGE_PROGATE_BITS);
> }
> - (*PageEntry) = (UINT64)(UINTN)NewPageEntry +
> PAGE_ATTRIBUTE_BITS;
> + (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask |
> + PAGE_ATTRIBUTE_BITS;
> return RETURN_SUCCESS;
> } else {
> return RETURN_UNSUPPORTED;
> @@ -325,9 +325,9 @@ SplitPage (
> }
> BaseAddress = *PageEntry & PAGING_1G_ADDRESS_MASK_64;
> for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
> - NewPageEntry[Index] = BaseAddress + SIZE_2MB * Index +
> IA32_PG_PS + ((*PageEntry) & PAGE_PROGATE_BITS);
> + NewPageEntry[Index] = (BaseAddress + SIZE_2MB * Index) |
> + mAddressEncMask | IA32_PG_PS | ((*PageEntry) &
> PAGE_PROGATE_BITS);
> }
> - (*PageEntry) = (UINT64)(UINTN)NewPageEntry +
> PAGE_ATTRIBUTE_BITS;
> + (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask |
> + PAGE_ATTRIBUTE_BITS;
> return RETURN_SUCCESS;
> } else {
> return RETURN_UNSUPPORTED;
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> index f53819e..1b84e2c 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/SmmProfile.c
> @@ -2,6 +2,8 @@
> Enable SMM profile.
>
> Copyright (c) 2012 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> +
> This program and the accompanying materials are licensed and made
> available under the terms and conditions of the BSD License which
> accompanies this distribution. The full text of the license may be found at
> @@ -513,7 +515,7 @@ InitPaging (
> //
> continue;
> }
> - Pde = (UINT64 *)(UINTN)(Pml4[Level1] & PHYSICAL_ADDRESS_MASK);
> + Pde = (UINT64 *)(UINTN)(Pml4[Level1] & ~mAddressEncMask &
> + PHYSICAL_ADDRESS_MASK);
> } else {
> Pde = (UINT64*)(UINTN)mSmmProfileCr3;
> }
> @@ -530,7 +532,7 @@ InitPaging (
> //
> continue;
> }
> - Pte = (UINT64 *)(UINTN)(*Pde & PHYSICAL_ADDRESS_MASK);
> + Pte = (UINT64 *)(UINTN)(*Pde & ~mAddressEncMask &
> + PHYSICAL_ADDRESS_MASK);
> if (Pte == 0) {
> continue;
> }
> @@ -557,9 +559,9 @@ InitPaging (
>
> // Split it
> for (Level4 = 0; Level4 < SIZE_4KB / sizeof(*Pt); Level4++) {
> - Pt[Level4] = Address + ((Level4 << 12) | PAGE_ATTRIBUTE_BITS);
> + Pt[Level4] = Address + ((Level4 << 12) | mAddressEncMask |
> + PAGE_ATTRIBUTE_BITS);
> } // end for PT
> - *Pte = (UINTN)Pt | PAGE_ATTRIBUTE_BITS;
> + *Pte = (UINT64)(UINTN)Pt | mAddressEncMask |
> + PAGE_ATTRIBUTE_BITS;
> } // end if IsAddressSplit
> } // end for PTE
> } // end for PDE
> @@ -577,7 +579,7 @@ InitPaging (
> //
> continue;
> }
> - Pde = (UINT64 *)(UINTN)(Pml4[Level1] & PHYSICAL_ADDRESS_MASK);
> + Pde = (UINT64 *)(UINTN)(Pml4[Level1] & ~mAddressEncMask &
> + PHYSICAL_ADDRESS_MASK);
> } else {
> Pde = (UINT64*)(UINTN)mSmmProfileCr3;
> }
> @@ -597,7 +599,7 @@ InitPaging (
> }
> continue;
> }
> - Pte = (UINT64 *)(UINTN)(*Pde & PHYSICAL_ADDRESS_MASK);
> + Pte = (UINT64 *)(UINTN)(*Pde & ~mAddressEncMask &
> + PHYSICAL_ADDRESS_MASK);
> if (Pte == 0) {
> continue;
> }
> @@ -624,7 +626,7 @@ InitPaging (
> }
> } else {
> // 4KB page
> - Pt = (UINT64 *)(UINTN)(*Pte & PHYSICAL_ADDRESS_MASK);
> + Pt = (UINT64 *)(UINTN)(*Pte & ~mAddressEncMask &
> + PHYSICAL_ADDRESS_MASK);
> if (Pt == 0) {
> continue;
> }
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> index 17b2f4c..19b19d8 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> @@ -2,6 +2,8 @@
> Page Fault (#PF) handler for X64 processors
>
> Copyright (c) 2009 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> +
> This program and the accompanying materials are licensed and made
> available under the terms and conditions of the BSD License which
> accompanies this distribution. The full text of the license may be found at
> @@ -16,6 +18,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY
> KIND, EITHER EXPRESS OR IMPLIED.
>
> #define PAGE_TABLE_PAGES 8
> #define ACC_MAX_BIT BIT3
> +
> LIST_ENTRY mPagePool = INITIALIZE_LIST_HEAD_VARIABLE
> (mPagePool);
> BOOLEAN m1GPageTableSupport = FALSE;
> UINT8 mPhysicalAddressBits;
> @@ -168,13 +171,13 @@ SetStaticPageTable (
> //
> // Each PML4 entry points to a page of Page Directory Pointer entries.
> //
> - PageDirectoryPointerEntry = (UINT64 *) ((*PageMapLevel4Entry) &
> gPhyMask);
> + PageDirectoryPointerEntry = (UINT64 *) ((*PageMapLevel4Entry) &
> + ~mAddressEncMask & gPhyMask);
> if (PageDirectoryPointerEntry == NULL) {
> PageDirectoryPointerEntry = AllocatePageTableMemory (1);
> ASSERT(PageDirectoryPointerEntry != NULL);
> ZeroMem (PageDirectoryPointerEntry, EFI_PAGES_TO_SIZE(1));
>
> - *PageMapLevel4Entry = ((UINTN)PageDirectoryPointerEntry &
> gPhyMask) | PAGE_ATTRIBUTE_BITS;
> + *PageMapLevel4Entry = (UINT64)(UINTN)PageDirectoryPointerEntry |
> + mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> }
>
> if (m1GPageTableSupport) {
> @@ -189,7 +192,7 @@ SetStaticPageTable (
> //
> // Fill in the Page Directory entries
> //
> - *PageDirectory1GEntry = (PageAddress & gPhyMask) | IA32_PG_PS |
> PAGE_ATTRIBUTE_BITS;
> + *PageDirectory1GEntry = PageAddress | mAddressEncMask |
> + IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
> }
> } else {
> PageAddress = BASE_4GB;
> @@ -204,7 +207,7 @@ SetStaticPageTable (
> // Each Directory Pointer entries points to a page of Page Directory
> entires.
> // So allocate space for them and fill them in in the
> IndexOfPageDirectoryEntries loop.
> //
> - PageDirectoryEntry = (UINT64 *) ((*PageDirectoryPointerEntry) &
> gPhyMask);
> + PageDirectoryEntry = (UINT64 *) ((*PageDirectoryPointerEntry) &
> + ~mAddressEncMask & gPhyMask);
> if (PageDirectoryEntry == NULL) {
> PageDirectoryEntry = AllocatePageTableMemory (1);
> ASSERT(PageDirectoryEntry != NULL);
> @@ -213,14 +216,14 @@ SetStaticPageTable (
> //
> // Fill in a Page Directory Pointer Entries
> //
> - *PageDirectoryPointerEntry = (UINT64)(UINTN)PageDirectoryEntry |
> PAGE_ATTRIBUTE_BITS;
> + *PageDirectoryPointerEntry =
> + (UINT64)(UINTN)PageDirectoryEntry | mAddressEncMask |
> + PAGE_ATTRIBUTE_BITS;
> }
>
> for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries <
> 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress +=
> SIZE_2MB) {
> //
> // Fill in the Page Directory entries
> //
> - *PageDirectoryEntry = (UINT64)PageAddress | IA32_PG_PS |
> PAGE_ATTRIBUTE_BITS;
> + *PageDirectoryEntry = PageAddress | mAddressEncMask |
> + IA32_PG_PS | PAGE_ATTRIBUTE_BITS;
> }
> }
> }
> @@ -276,7 +279,7 @@ SmmInitPageTable (
> //
> PTEntry = (UINT64*)AllocatePageTableMemory (1);
> ASSERT (PTEntry != NULL);
> - *PTEntry = Pages | PAGE_ATTRIBUTE_BITS;
> + *PTEntry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> ZeroMem (PTEntry + 1, EFI_PAGE_SIZE - sizeof (*PTEntry));
>
> //
> @@ -457,7 +460,7 @@ ReclaimPages (
> //
> continue;
> }
> - Pdpt = (UINT64*)(UINTN)(Pml4[Pml4Index] & gPhyMask);
> + Pdpt = (UINT64*)(UINTN)(Pml4[Pml4Index] & ~mAddressEncMask &
> + gPhyMask);
> PML4EIgnore = FALSE;
> for (PdptIndex = 0; PdptIndex < EFI_PAGE_SIZE / sizeof (*Pdpt);
> PdptIndex++) {
> if ((Pdpt[PdptIndex] & IA32_PG_P) == 0 || (Pdpt[PdptIndex] & IA32_PG_PMNT) != 0) {
> @@ -478,7 +481,7 @@ ReclaimPages (
> // we will not check PML4 entry more
> //
> PML4EIgnore = TRUE;
> - Pdt = (UINT64*)(UINTN)(Pdpt[PdptIndex] & gPhyMask);
> + Pdt = (UINT64*)(UINTN)(Pdpt[PdptIndex] & ~mAddressEncMask &
> + gPhyMask);
> PDPTEIgnore = FALSE;
> for (PdtIndex = 0; PdtIndex < EFI_PAGE_SIZE / sizeof(*Pdt);
> PdtIndex++) {
> if ((Pdt[PdtIndex] & IA32_PG_P) == 0 || (Pdt[PdtIndex] & IA32_PG_PMNT) != 0) {
> @@ -560,7 +563,7 @@ ReclaimPages (
> //
> // Secondly, insert the page pointed by this entry into page pool and clear
> this entry
> //
> - InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(*ReleasePageAddress
> & gPhyMask));
> + InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(*ReleasePageAddress
> + & ~mAddressEncMask & gPhyMask));
> *ReleasePageAddress = 0;
>
> //
> @@ -572,14 +575,14 @@ ReclaimPages (
> //
> // If 4 KByte Page Table is released, check the PDPT entry
> //
> - Pdpt = (UINT64*)(UINTN)(Pml4[MinPml4] & gPhyMask);
> + Pdpt = (UINT64*)(UINTN)(Pml4[MinPml4] & ~mAddressEncMask &
> + gPhyMask);
> SubEntriesNum = GetSubEntriesNum(Pdpt + MinPdpt);
> if (SubEntriesNum == 0) {
> //
> // Release the empty Page Directory table if there was no more 4 KByte
> Page Table entry
> // clear the Page directory entry
> //
> - InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pdpt[MinPdpt] &
> gPhyMask));
> + InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pdpt[MinPdpt]
> + & ~mAddressEncMask & gPhyMask));
> Pdpt[MinPdpt] = 0;
> //
> // Go on checking the PML4 table
> @@ -603,7 +606,7 @@ ReclaimPages (
> // Release the empty PML4 table if there was no more 1G KByte Page
> Table entry
> // clear the Page directory entry
> //
> - InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pml4[MinPml4] &
> gPhyMask));
> + InsertTailList (&mPagePool, (LIST_ENTRY*)(UINTN)(Pml4[MinPml4]
> + & ~mAddressEncMask & gPhyMask));
> Pml4[MinPml4] = 0;
> MinPdpt = (UINTN)-1;
> continue;
> @@ -747,7 +750,7 @@ SmiDefaultPFHandler (
> //
> // If the entry is not present, allocate one page from page pool for it
> //
> - PageTable[PTIndex] = AllocPage () | PAGE_ATTRIBUTE_BITS;
> + PageTable[PTIndex] = AllocPage () | mAddressEncMask |
> + PAGE_ATTRIBUTE_BITS;
> } else {
> //
> // Save the upper entry address
> @@ -760,7 +763,7 @@ SmiDefaultPFHandler (
> //
> PageTable[PTIndex] |= (UINT64)IA32_PG_A;
> SetAccNum (PageTable + PTIndex, 7);
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] & gPhyMask);
> + PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] &
> + ~mAddressEncMask & gPhyMask);
> }
>
> PTIndex = BitFieldRead64 (PFAddress, StartBit, StartBit + 8);
> @@ -776,7 +779,7 @@ SmiDefaultPFHandler (
> //
> // Fill the new entry
> //
> - PageTable[PTIndex] = (PFAddress & gPhyMask & ~((1ull << EndBit) - 1)) |
> + PageTable[PTIndex] = ((PFAddress | mAddressEncMask) & gPhyMask &
> + ~((1ull << EndBit) - 1)) |
> PageAttribute | IA32_PG_A | PAGE_ATTRIBUTE_BITS;
> if (UpperEntry != NULL) {
> SetSubEntriesNum (UpperEntry, GetSubEntriesNum (UpperEntry) + 1);
> @@ -927,7 +930,7 @@ SetPageTableAttributes (
> PageTableSplitted = (PageTableSplitted || IsSplitted);
>
> for (Index4 = 0; Index4 < SIZE_4KB/sizeof(UINT64); Index4++) {
> - L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] &
> PAGING_4K_ADDRESS_MASK_64);
> + L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] &
> + ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> if (L3PageTable == NULL) {
> continue;
> }
> @@ -940,7 +943,7 @@ SetPageTableAttributes (
> // 1G
> continue;
> }
> - L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] &
> PAGING_4K_ADDRESS_MASK_64);
> + L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] &
> + ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> if (L2PageTable == NULL) {
> continue;
> }
> @@ -953,7 +956,7 @@ SetPageTableAttributes (
> // 2M
> continue;
> }
> - L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] &
> PAGING_4K_ADDRESS_MASK_64);
> + L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] &
> + ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> if (L1PageTable == NULL) {
> continue;
> }
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
> index cc393dc..37da5fb 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/SmmProfileArch.c
> @@ -2,6 +2,8 @@
> X64 processor specific functions to enable SMM profile.
>
> Copyright (c) 2012 - 2016, Intel Corporation. All rights reserved.<BR>
> +Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
> +
> This program and the accompanying materials are licensed and made
> available under the terms and conditions of the BSD License which
> accompanies this distribution. The full text of the license may be found at
> @@ -52,7 +54,7 @@ InitSmmS3Cr3 (
> //
> PTEntry = (UINT64*)AllocatePageTableMemory (1);
> ASSERT (PTEntry != NULL);
> - *PTEntry = Pages | PAGE_ATTRIBUTE_BITS;
> + *PTEntry = Pages | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> ZeroMem (PTEntry + 1, EFI_PAGE_SIZE - sizeof (*PTEntry));
>
> //
> @@ -111,14 +113,14 @@ AcquirePage (
> //
> // Cut the previous uplink if it exists and wasn't overwritten
> //
> - if ((mPFPageUplink[mPFPageIndex] != NULL) &&
> ((*mPFPageUplink[mPFPageIndex] & PHYSICAL_ADDRESS_MASK) ==
> Address)) {
> + if ((mPFPageUplink[mPFPageIndex] != NULL) &&
> + ((*mPFPageUplink[mPFPageIndex] & ~mAddressEncMask &
> + PHYSICAL_ADDRESS_MASK) == Address)) {
> *mPFPageUplink[mPFPageIndex] = 0;
> }
>
> //
> // Link & Record the current uplink
> //
> - *Uplink = Address | PAGE_ATTRIBUTE_BITS;
> + *Uplink = Address | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> mPFPageUplink[mPFPageIndex] = Uplink;
>
> mPFPageIndex = (mPFPageIndex + 1) % MAX_PF_PAGE_COUNT;
> @@ -168,33 +170,33 @@ RestorePageTableAbove4G (
> PTIndex = BitFieldRead64 (PFAddress, 39, 47);
> if ((PageTable[PTIndex] & IA32_PG_P) != 0) {
> // PML4E
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] &
> PHYSICAL_ADDRESS_MASK);
> + PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] &
> ~mAddressEncMask
> + & PHYSICAL_ADDRESS_MASK);
> PTIndex = BitFieldRead64 (PFAddress, 30, 38);
> if ((PageTable[PTIndex] & IA32_PG_P) != 0) {
> // PDPTE
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] &
> PHYSICAL_ADDRESS_MASK);
> + PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] &
> + ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
> PTIndex = BitFieldRead64 (PFAddress, 21, 29);
> // PD
> if ((PageTable[PTIndex] & IA32_PG_PS) != 0) {
> //
> // 2MB page
> //
> - Address = (UINT64)(PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK);
> - if ((Address & PHYSICAL_ADDRESS_MASK & ~((1ull << 21) - 1)) ==
> ((PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 21) - 1)))) {
> + Address = (UINT64)(PageTable[PTIndex] & ~mAddressEncMask &
> PHYSICAL_ADDRESS_MASK);
> + if ((Address & ~((1ull << 21) - 1)) == ((PFAddress &
> + PHYSICAL_ADDRESS_MASK & ~((1ull << 21) - 1)))) {
> Existed = TRUE;
> }
> } else {
> //
> // 4KB page
> //
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] &
> PHYSICAL_ADDRESS_MASK);
> + PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] &
> + ~mAddressEncMask& PHYSICAL_ADDRESS_MASK);
> if (PageTable != 0) {
> //
> // When there is a valid entry to map to 4KB page, need not create a
> new entry to map 2MB.
> //
> PTIndex = BitFieldRead64 (PFAddress, 12, 20);
> - Address = (UINT64)(PageTable[PTIndex] &
> PHYSICAL_ADDRESS_MASK);
> - if ((Address & PHYSICAL_ADDRESS_MASK & ~((1ull << 12) - 1)) ==
> (PFAddress & PHYSICAL_ADDRESS_MASK & ~((1ull << 12) - 1))) {
> + Address = (UINT64)(PageTable[PTIndex] & ~mAddressEncMask &
> PHYSICAL_ADDRESS_MASK);
> + if ((Address & ~((1ull << 12) - 1)) == (PFAddress &
> + PHYSICAL_ADDRESS_MASK & ~((1ull << 12) - 1))) {
> Existed = TRUE;
> }
> }
> @@ -227,13 +229,13 @@ RestorePageTableAbove4G (
> PFAddress = AsmReadCr2 ();
> // PML4E
> PTIndex = BitFieldRead64 (PFAddress, 39, 47);
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] &
> PHYSICAL_ADDRESS_MASK);
> + PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] &
> ~mAddressEncMask
> + & PHYSICAL_ADDRESS_MASK);
> // PDPTE
> PTIndex = BitFieldRead64 (PFAddress, 30, 38);
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] &
> PHYSICAL_ADDRESS_MASK);
> + PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] &
> ~mAddressEncMask
> + & PHYSICAL_ADDRESS_MASK);
> // PD
> PTIndex = BitFieldRead64 (PFAddress, 21, 29);
> - Address = PageTable[PTIndex] & PHYSICAL_ADDRESS_MASK;
> + Address = PageTable[PTIndex] & ~mAddressEncMask &
> + PHYSICAL_ADDRESS_MASK;
> //
> // Check if 2MB-page entry need be changed to 4KB-page entry.
> //
> @@ -241,9 +243,9 @@ RestorePageTableAbove4G (
> AcquirePage (&PageTable[PTIndex]);
>
> // PTE
> - PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] &
> PHYSICAL_ADDRESS_MASK);
> + PageTable = (UINT64*)(UINTN)(PageTable[PTIndex] &
> + ~mAddressEncMask & PHYSICAL_ADDRESS_MASK);
> for (Index = 0; Index < 512; Index++) {
> - PageTable[Index] = Address | PAGE_ATTRIBUTE_BITS;
> + PageTable[Index] = Address | mAddressEncMask |
> + PAGE_ATTRIBUTE_BITS;
> if (!IsAddressValid (Address, &Nx)) {
> PageTable[Index] = PageTable[Index] &
> (INTN)(INT32)(~PAGE_ATTRIBUTE_BITS);
> }
> --
> 2.7.4
Thread overview: 15+ messages
2017-02-26 17:43 [PATCH v4 0/6] Add PCD PcdPteMemoryEncryptionAddressOrMask Leo Duran
2017-02-26 17:43 ` [PATCH v4 1/6] MdeModulePkg: " Leo Duran
2017-02-27 2:20 ` Zeng, Star
2017-02-27 14:12 ` Duran, Leo
2017-02-28 0:59 ` Zeng, Star
2017-02-26 17:43 ` [PATCH v4 2/6] MdeModulePkg/Core/DxeIplPeim: Add support for " Leo Duran
2017-02-26 17:43 ` [PATCH v4 3/6] MdeModulePkg/Universal/CapsulePei: " Leo Duran
2017-02-26 17:43 ` [PATCH v4 4/6] UefiCpuPkg/Universal/Acpi/S3Resume2Pei: " Leo Duran
2017-02-28 8:12 ` Fan, Jeff
2017-02-26 17:43 ` [PATCH v4 5/6] MdeModulePkg/Universal/Acpi/BootScriptExecutorDxe: " Leo Duran
2017-02-26 17:43 ` [PATCH v4 6/6] UefiCpuPkg/PiSmmCpuDxeSmm: " Leo Duran
2017-02-27 7:51 ` Fan, Jeff
2017-02-27 14:15 ` Duran, Leo [this message]
2017-02-28 8:12 ` Fan, Jeff
2017-03-01 4:56 ` [PATCH v4 0/6] Add " Zeng, Star