* [PATCH V3 0/4] Reserve shared memory for DMA operation
@ 2022-12-15 5:18 Min Xu
2022-12-15 5:18 ` [PATCH V3 1/4] OvmfPkg/IoMmuDxe: Reserve shared memory region " Min Xu
` (4 more replies)
0 siblings, 5 replies; 8+ messages in thread
From: Min Xu @ 2022-12-15 5:18 UTC (permalink / raw)
To: devel
Cc: Min Xu, Erdem Aktas, James Bottomley, Jiewen Yao, Gerd Hoffmann,
Michael D Kinney, Liming Gao, Tom Lendacky
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4171
This patch set introduces the feature of reserving shared memory
for DMA operations. Its intention is to reduce the allocation and
private/shared conversion of memory, so that boot performance
can be improved significantly. Detailed information is in Patch#1.
Patch#2 renames AmdSevIoMmu.* to CcIoMmu.* because these two files
support both SEV and TDX guests.
Patch#3, provided by Tom Lendacky, adds SEV support for the reserved
shared memory.
Patch#4 updates the related section in Maintainers.txt.
Code: https://github.com/mxu9/edk2/tree/IoMmu.v3
v3 changes:
- Check mReservedSharedMemSupported in IoMmuReleaseReservedSharedMem
before releasing the pre-allocated shared memory.
- Delete a comment line in Patch#3, as suggested by Tom Lendacky.
v2 changes:
- Add Patch#3, provided by Tom Lendacky, which adds SEV support for
reserved shared memory.
- Add more description for mReservedMemRanges. It describes:
1) How the pre-allocated memory is managed.
2) What happens if the pre-allocated memory is used up.
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
Min M Xu (3):
OvmfPkg/IoMmuDxe: Reserve shared memory region for DMA operation
OvmfPkg/IoMmuDxe: Rename AmdSevIoMmu to CcIoMmu
Maintainers: Update OvmfPkg/IoMmuDxe
Tom Lendacky (1):
OvmfPkg/IoMmuDxe: Add SEV support for reserved shared memory
Maintainers.txt | 2 +-
OvmfPkg/IoMmuDxe/{AmdSevIoMmu.c => CcIoMmu.c} | 204 ++++----
OvmfPkg/IoMmuDxe/{AmdSevIoMmu.h => CcIoMmu.h} | 0
OvmfPkg/IoMmuDxe/IoMmuBuffer.c | 495 ++++++++++++++++++
OvmfPkg/IoMmuDxe/IoMmuDxe.c | 2 +-
OvmfPkg/IoMmuDxe/IoMmuDxe.inf | 6 +-
OvmfPkg/IoMmuDxe/IoMmuInternal.h | 179 +++++++
7 files changed, 781 insertions(+), 107 deletions(-)
rename OvmfPkg/IoMmuDxe/{AmdSevIoMmu.c => CcIoMmu.c} (85%)
rename OvmfPkg/IoMmuDxe/{AmdSevIoMmu.h => CcIoMmu.h} (100%)
create mode 100644 OvmfPkg/IoMmuDxe/IoMmuBuffer.c
create mode 100644 OvmfPkg/IoMmuDxe/IoMmuInternal.h
--
2.29.2.windows.2
* [PATCH V3 1/4] OvmfPkg/IoMmuDxe: Reserve shared memory region for DMA operation
2022-12-15 5:18 [PATCH V3 0/4] Reserve shared memory for DMA operation Min Xu
@ 2022-12-15 5:18 ` Min Xu
2022-12-15 5:18 ` [PATCH V3 2/4] OvmfPkg/IoMmuDxe: Rename AmdSevIoMmu to CcIoMmu Min Xu
` (3 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Min Xu @ 2022-12-15 5:18 UTC (permalink / raw)
To: devel
Cc: Min M Xu, Erdem Aktas, James Bottomley, Jiewen Yao, Tom Lendacky,
Gerd Hoffmann, Jiewen Yao
From: Min M Xu <min.m.xu@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4171
A typical QEMU fw_cfg read with IOMMU in a TD guest works as follows
(QemuFwCfgReadBytes() in QemuFwCfgLib.c is the example):
1) Allocate the DMA access buffer
2) Map the actual data buffer
3) Start the transfer and wait for the transfer to complete
4) Free the DMA access buffer
5) Unmap the actual data buffer
In steps 1/2, private memory is allocated and converted to shared memory.
In steps 4/5, the shared memory is converted back to private memory and
accepted again. The final step is to free the pages.
This is time-consuming and badly impacts the TD guest's boot performance
(both direct boot and grub boot).
In a typical grub boot, there are about 5000 calls for page allocation and
private/shared conversion; most of the requested sizes are below 32KB.
This patch allocates a memory region and initializes it into pieces of
memory of different sizes. Each such piece consists of 2 parts: the
first page is private memory, and the remaining pages are shared
memory. This matches the layout of a common buffer.
When a bounce buffer is needed in IoMmuMap(), IoMmuAllocateBounceBuffer()
is called to allocate it. Accordingly, when the bounce buffer is freed in
IoMmuUnmapWorker(), IoMmuFreeBounceBuffer() is called to release it. A
CommonBuffer is allocated by IoMmuAllocateCommonBuffer() and accordingly
freed by IoMmuFreeCommonBuffer().
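The following is an illustrative sketch only (simplified from the hunks
below, with the CC-guest-type dispatch and some error handling omitted);
it shows how the bounce-buffer path in IoMmuMap() uses the reserved pool
and falls back to the old behavior when no pre-allocated slot fits:

  //
  // Sketch, not the literal patch: try the reserved shared-memory pool
  // first. IoMmuAllocateBounceBuffer() records the slot it used in
  // MapInfo->ReservedMemBitmap; a value of 0 means the legacy
  // gBS->AllocatePages() path was taken instead.
  //
  Status = IoMmuAllocateBounceBuffer (AllocateType, EfiBootServicesData, MapInfo);
  if (EFI_ERROR (Status)) {
    goto FreeMapInfo;
  }

  if (MapInfo->ReservedMemBitmap == 0) {
    //
    // Not served from the reserved pool, so the pages still have to be
    // converted to shared memory (TDX case shown here).
    //
    Status = MemEncryptTdxSetPageSharedBit (
               0,
               MapInfo->PlainTextAddress,
               MapInfo->NumberOfPages
               );
  }

When the buffer comes from the reserved pool, both the page allocation and
the private/shared conversion are skipped, which is where the boot-time
saving comes from.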
This feature has been tested on an Intel TDX pre-production platform. It
saves up to hundreds of milliseconds in a grub boot.
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiewen Yao <Jiewen.yao@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
---
OvmfPkg/IoMmuDxe/AmdSevIoMmu.c | 142 +++++-----
OvmfPkg/IoMmuDxe/IoMmuBuffer.c | 465 +++++++++++++++++++++++++++++++
OvmfPkg/IoMmuDxe/IoMmuDxe.inf | 1 +
OvmfPkg/IoMmuDxe/IoMmuInternal.h | 179 ++++++++++++
4 files changed, 716 insertions(+), 71 deletions(-)
create mode 100644 OvmfPkg/IoMmuDxe/IoMmuBuffer.c
create mode 100644 OvmfPkg/IoMmuDxe/IoMmuInternal.h
diff --git a/OvmfPkg/IoMmuDxe/AmdSevIoMmu.c b/OvmfPkg/IoMmuDxe/AmdSevIoMmu.c
index 6b65897f032a..77e46bbf4a60 100644
--- a/OvmfPkg/IoMmuDxe/AmdSevIoMmu.c
+++ b/OvmfPkg/IoMmuDxe/AmdSevIoMmu.c
@@ -15,18 +15,7 @@
#include <Library/PcdLib.h>
#include <ConfidentialComputingGuestAttr.h>
#include "AmdSevIoMmu.h"
-
-#define MAP_INFO_SIG SIGNATURE_64 ('M', 'A', 'P', '_', 'I', 'N', 'F', 'O')
-
-typedef struct {
- UINT64 Signature;
- LIST_ENTRY Link;
- EDKII_IOMMU_OPERATION Operation;
- UINTN NumberOfBytes;
- UINTN NumberOfPages;
- EFI_PHYSICAL_ADDRESS CryptedAddress;
- EFI_PHYSICAL_ADDRESS PlainTextAddress;
-} MAP_INFO;
+#include "IoMmuInternal.h"
//
// List of the MAP_INFO structures that have been set up by IoMmuMap() and not
@@ -35,7 +24,10 @@ typedef struct {
//
STATIC LIST_ENTRY mMapInfos = INITIALIZE_LIST_HEAD_VARIABLE (mMapInfos);
-#define COMMON_BUFFER_SIG SIGNATURE_64 ('C', 'M', 'N', 'B', 'U', 'F', 'F', 'R')
+//
+// Indicate if the feature of reserved memory is supported in DMA operation.
+//
+BOOLEAN mReservedSharedMemSupported = FALSE;
//
// ASCII names for EDKII_IOMMU_OPERATION constants, for debug logging.
@@ -50,30 +42,6 @@ mBusMasterOperationName[EdkiiIoMmuOperationMaximum] = {
"CommonBuffer64"
};
-//
-// The following structure enables Map() and Unmap() to perform in-place
-// decryption and encryption, respectively, for BusMasterCommonBuffer[64]
-// operations, without dynamic memory allocation or release.
-//
-// Both COMMON_BUFFER_HEADER and COMMON_BUFFER_HEADER.StashBuffer are allocated
-// by AllocateBuffer() and released by FreeBuffer().
-//
-#pragma pack (1)
-typedef struct {
- UINT64 Signature;
-
- //
- // Always allocated from EfiBootServicesData type memory, and always
- // encrypted.
- //
- VOID *StashBuffer;
-
- //
- // Followed by the actual common buffer, starting at the next page.
- //
-} COMMON_BUFFER_HEADER;
-#pragma pack ()
-
/**
Provides the controller-specific addresses required to access system memory
from a DMA bus master. On SEV/TDX guest, the DMA operations must be performed on
@@ -139,6 +107,8 @@ IoMmuMap (
return EFI_INVALID_PARAMETER;
}
+ Status = EFI_SUCCESS;
+
//
// Allocate a MAP_INFO structure to remember the mapping when Unmap() is
// called later.
@@ -153,11 +123,12 @@ IoMmuMap (
// Initialize the MAP_INFO structure, except the PlainTextAddress field
//
ZeroMem (&MapInfo->Link, sizeof MapInfo->Link);
- MapInfo->Signature = MAP_INFO_SIG;
- MapInfo->Operation = Operation;
- MapInfo->NumberOfBytes = *NumberOfBytes;
- MapInfo->NumberOfPages = EFI_SIZE_TO_PAGES (MapInfo->NumberOfBytes);
- MapInfo->CryptedAddress = (UINTN)HostAddress;
+ MapInfo->Signature = MAP_INFO_SIG;
+ MapInfo->Operation = Operation;
+ MapInfo->NumberOfBytes = *NumberOfBytes;
+ MapInfo->NumberOfPages = EFI_SIZE_TO_PAGES (MapInfo->NumberOfBytes);
+ MapInfo->CryptedAddress = (UINTN)HostAddress;
+ MapInfo->ReservedMemBitmap = 0;
//
// In the switch statement below, we point "MapInfo->PlainTextAddress" to the
@@ -185,12 +156,11 @@ IoMmuMap (
//
// Allocate the implicit plaintext bounce buffer.
//
- Status = gBS->AllocatePages (
- AllocateType,
- EfiBootServicesData,
- MapInfo->NumberOfPages,
- &MapInfo->PlainTextAddress
- );
+ Status = IoMmuAllocateBounceBuffer (
+ AllocateType,
+ EfiBootServicesData,
+ MapInfo
+ );
if (EFI_ERROR (Status)) {
goto FreeMapInfo;
}
@@ -241,7 +211,8 @@ IoMmuMap (
// Point "DecryptionSource" to the stash buffer so that we decrypt
// it to the original location, after the switch statement.
//
- DecryptionSource = CommonBufferHeader->StashBuffer;
+ DecryptionSource = CommonBufferHeader->StashBuffer;
+ MapInfo->ReservedMemBitmap = CommonBufferHeader->ReservedMemBitmap;
break;
default:
@@ -264,12 +235,16 @@ IoMmuMap (
} else if (CC_GUEST_IS_TDX (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
//
// Set the memory shared bit.
+ // If MapInfo->ReservedMemBitmap is 0, it means the bounce buffer is not allocated
+ // from the pre-allocated shared memory, so it must be converted to shared memory here.
//
- Status = MemEncryptTdxSetPageSharedBit (
- 0,
- MapInfo->PlainTextAddress,
- MapInfo->NumberOfPages
- );
+ if (MapInfo->ReservedMemBitmap == 0) {
+ Status = MemEncryptTdxSetPageSharedBit (
+ 0,
+ MapInfo->PlainTextAddress,
+ MapInfo->NumberOfPages
+ );
+ }
} else {
ASSERT (FALSE);
}
@@ -311,12 +286,13 @@ IoMmuMap (
DEBUG ((
DEBUG_VERBOSE,
- "%a: Mapping=0x%p Device(PlainText)=0x%Lx Crypted=0x%Lx Pages=0x%Lx\n",
+ "%a: Mapping=0x%p Device(PlainText)=0x%Lx Crypted=0x%Lx Pages=0x%Lx, ReservedMemBitmap=0x%Lx\n",
__FUNCTION__,
MapInfo,
MapInfo->PlainTextAddress,
MapInfo->CryptedAddress,
- (UINT64)MapInfo->NumberOfPages
+ (UINT64)MapInfo->NumberOfPages,
+ MapInfo->ReservedMemBitmap
));
return EFI_SUCCESS;
@@ -435,11 +411,13 @@ IoMmuUnmapWorker (
// Restore the memory shared bit mask on the area we used to hold the
// plaintext.
//
- Status = MemEncryptTdxClearPageSharedBit (
- 0,
- MapInfo->PlainTextAddress,
- MapInfo->NumberOfPages
- );
+ if (MapInfo->ReservedMemBitmap == 0) {
+ Status = MemEncryptTdxClearPageSharedBit (
+ 0,
+ MapInfo->PlainTextAddress,
+ MapInfo->NumberOfPages
+ );
+ }
} else {
ASSERT (FALSE);
}
@@ -470,8 +448,9 @@ IoMmuUnmapWorker (
(VOID *)(UINTN)MapInfo->PlainTextAddress,
EFI_PAGES_TO_SIZE (MapInfo->NumberOfPages)
);
+
if (!MemoryMapLocked) {
- gBS->FreePages (MapInfo->PlainTextAddress, MapInfo->NumberOfPages);
+ IoMmuFreeBounceBuffer (MapInfo);
}
}
@@ -551,6 +530,7 @@ IoMmuAllocateBuffer (
VOID *StashBuffer;
UINTN CommonBufferPages;
COMMON_BUFFER_HEADER *CommonBufferHeader;
+ UINT32 ReservedMemBitmap;
DEBUG ((
DEBUG_VERBOSE,
@@ -620,12 +600,13 @@ IoMmuAllocateBuffer (
PhysicalAddress = SIZE_4GB - 1;
}
- Status = gBS->AllocatePages (
- AllocateMaxAddress,
- MemoryType,
- CommonBufferPages,
- &PhysicalAddress
- );
+ Status = IoMmuAllocateCommonBuffer (
+ MemoryType,
+ CommonBufferPages,
+ &PhysicalAddress,
+ &ReservedMemBitmap
+ );
+
if (EFI_ERROR (Status)) {
goto FreeStashBuffer;
}
@@ -633,8 +614,9 @@ IoMmuAllocateBuffer (
CommonBufferHeader = (VOID *)(UINTN)PhysicalAddress;
PhysicalAddress += EFI_PAGE_SIZE;
- CommonBufferHeader->Signature = COMMON_BUFFER_SIG;
- CommonBufferHeader->StashBuffer = StashBuffer;
+ CommonBufferHeader->Signature = COMMON_BUFFER_SIG;
+ CommonBufferHeader->StashBuffer = StashBuffer;
+ CommonBufferHeader->ReservedMemBitmap = ReservedMemBitmap;
*HostAddress = (VOID *)(UINTN)PhysicalAddress;
@@ -707,7 +689,7 @@ IoMmuFreeBuffer (
// Release the common buffer itself. Unmap() has re-encrypted it in-place, so
// no need to zero it.
//
- return gBS->FreePages ((UINTN)CommonBufferHeader, CommonBufferPages);
+ return IoMmuFreeCommonBuffer (CommonBufferHeader, CommonBufferPages);
}
/**
@@ -878,6 +860,11 @@ IoMmuUnmapAllMappings (
TRUE // MemoryMapLocked
);
}
+
+ //
+ // Release the reserved shared memory as well.
+ //
+ IoMmuReleaseReservedSharedMem (TRUE);
}
/**
@@ -936,6 +923,19 @@ InstallIoMmuProtocol (
goto CloseExitBootEvent;
}
+ //
+ // Currently only a TDX guest supports reserved shared memory for DMA operations.
+ //
+ if (CC_GUEST_IS_TDX (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
+ mReservedSharedMemSupported = TRUE;
+ Status = IoMmuInitReservedSharedMem ();
+ if (EFI_ERROR (Status)) {
+ mReservedSharedMemSupported = FALSE;
+ } else {
+ DEBUG ((DEBUG_INFO, "%a: Feature of reserved memory for DMA is supported.\n", __FUNCTION__));
+ }
+ }
+
return EFI_SUCCESS;
CloseExitBootEvent:
diff --git a/OvmfPkg/IoMmuDxe/IoMmuBuffer.c b/OvmfPkg/IoMmuDxe/IoMmuBuffer.c
new file mode 100644
index 000000000000..2675223aa0df
--- /dev/null
+++ b/OvmfPkg/IoMmuDxe/IoMmuBuffer.c
@@ -0,0 +1,465 @@
+/** @file
+
+ Copyright (c) 2022, Intel Corporation. All rights reserved.<BR>
+
+ SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+#include <Library/BaseLib.h>
+#include <Library/BaseMemoryLib.h>
+#include <Library/DebugLib.h>
+#include <Library/MemoryAllocationLib.h>
+#include <Library/MemEncryptTdxLib.h>
+#include <Library/UefiBootServicesTableLib.h>
+#include "IoMmuInternal.h"
+
+extern BOOLEAN mReservedSharedMemSupported;
+
+#define SIZE_OF_MEM_RANGE(MemRange) (MemRange->HeaderSize + MemRange->DataSize)
+
+#define RESERVED_MEM_BITMAP_4K_MASK 0xf
+#define RESERVED_MEM_BITMAP_32K_MASK 0xff0
+#define RESERVED_MEM_BITMAP_128K_MASK 0x3000
+#define RESERVED_MEM_BITMAP_1M_MASK 0x40000
+#define RESERVED_MEM_BITMAP_2M_MASK 0x180000
+#define RESERVED_MEM_BITMAP_MASK 0x1fffff
+
+/**
+ * mReservedMemRanges describes the layout of the reserved memory.
+ * The reserved memory consists of memory regions of different sizes.
+ * The pieces of memory with the same size are managed by one entry
+ * in mReservedMemRanges. All the pieces of memory are tracked by
+ * mReservedMemBitmap, which is a UINT32, so at most 32 pieces of
+ * memory can be managed. Because of the layout of the CommonBuffer
+ * (1-page header + n-page data), a piece of reserved memory consists of
+ * 2 parts: Header + Data.
+ *
+ * Putting all this together, mReservedMemRanges and mReservedMemBitmap
+ * are designed to manage the reserved memory.
+ *
+ * Use the second entry of mReservedMemRanges as an example.
+ * { RESERVED_MEM_BITMAP_32K_MASK, 4, 8, SIZE_32KB, SIZE_4KB, 0 },
+ * - RESERVED_MEM_BITMAP_32K_MASK is 0xff0. It means bit4-11 in mReservedMemBitmap
+ * is reserved for 32K size memory.
+ * - 4 is the shift of mReservedMemBitmap.
+ * - 8 means there are 8 pieces of 32K size memory.
+ * - SIZE_32KB indicates the size of Data part.
+ * - SIZE_4KB is the size of Header part.
+ * - 0 is the start address of this memory range which will be populated when
+ * the reserved memory is initialized.
+ *
+ * The size and count of the memory regions are derived from experience. For
+ * a typical grub boot, there are about 5100 IoMmu/DMA operations. Most of
+ * these operations require less than 32K of memory (~5080). But we find that
+ * in a grub boot there may be 2 DMA operations which require memory larger
+ * than 1M, and these 2 operations occur concurrently. So we reserve 2 pieces
+ * of memory of SIZE_2MB. This gives the best boot performance.
+ *
+ * If all the reserved memory is exhausted, then it falls back to the legacy
+ * memory allocation as before.
+ */
+STATIC IOMMU_RESERVED_MEM_RANGE mReservedMemRanges[] = {
+ { RESERVED_MEM_BITMAP_4K_MASK, 0, 4, SIZE_4KB, SIZE_4KB, 0 },
+ { RESERVED_MEM_BITMAP_32K_MASK, 4, 8, SIZE_32KB, SIZE_4KB, 0 },
+ { RESERVED_MEM_BITMAP_128K_MASK, 12, 2, SIZE_128KB, SIZE_4KB, 0 },
+ { RESERVED_MEM_BITMAP_1M_MASK, 14, 1, SIZE_1MB, SIZE_4KB, 0 },
+ { RESERVED_MEM_BITMAP_2M_MASK, 15, 2, SIZE_2MB, SIZE_4KB, 0 },
+};
+
+//
+// Bitmap of the allocation of reserved memory.
+//
+STATIC UINT32 mReservedMemBitmap = 0;
+
+//
+// Start address of the reserved memory region.
+//
+STATIC EFI_PHYSICAL_ADDRESS mReservedSharedMemAddress = 0;
+
+//
+// Total size of the reserved memory region.
+//
+STATIC UINT32 mReservedSharedMemSize = 0;
+
+/**
+ * Calculate the size of reserved memory.
+ *
+ * @retval UINT32 Size of the reserved memory
+ */
+STATIC
+UINT32
+CalcuateReservedMemSize (
+ VOID
+ )
+{
+ UINT32 Index;
+ IOMMU_RESERVED_MEM_RANGE *MemRange;
+
+ if (mReservedSharedMemSize != 0) {
+ return mReservedSharedMemSize;
+ }
+
+ for (Index = 0; Index < ARRAY_SIZE (mReservedMemRanges); Index++) {
+ MemRange = &mReservedMemRanges[Index];
+ mReservedSharedMemSize += (SIZE_OF_MEM_RANGE (MemRange) * MemRange->Slots);
+ }
+
+ return mReservedSharedMemSize;
+}
+
+/**
+ * Allocate a memory region and convert it to be shared. This memory region will be
+ * used in DMA operations.
+ *
+ * The pre-allocated memory contains pieces of memory regions of different sizes. The
+ * allocation of the shared memory regions is indicated by a 32-bit bitmap (mReservedMemBitmap).
+ *
+ * The memory regions are consumed by IoMmuAllocateBuffer (in which the CommonBuffer is allocated) and
+ * IoMmuMap (in which the bounce buffer is allocated).
+ *
+ * The CommonBuffer contains 2 parts: one page for the CommonBufferHeader, which is private memory;
+ * the other part is shared memory. So the layout of a piece of memory region after initialization
+ * looks like:
+ *
+ * |------------|----------------------------|
+ * | Header | Data | <-- a piece of pre-alloc memory region
+ * | 4k, private| 4k/32k/128k/etc, shared |
+ * |-----------------------------------------|
+ *
+ * @retval EFI_SUCCESS Successfully initialize the reserved memory.
+ * @retval EFI_UNSUPPORTED This feature is not supported.
+ */
+EFI_STATUS
+IoMmuInitReservedSharedMem (
+ VOID
+ )
+{
+ EFI_STATUS Status;
+ UINT32 Index1, Index2;
+ UINTN TotalPages;
+ IOMMU_RESERVED_MEM_RANGE *MemRange;
+ EFI_PHYSICAL_ADDRESS PhysicalAddress;
+
+ if (!mReservedSharedMemSupported) {
+ return EFI_UNSUPPORTED;
+ }
+
+ TotalPages = EFI_SIZE_TO_PAGES (CalcuateReservedMemSize ());
+
+ PhysicalAddress = (EFI_PHYSICAL_ADDRESS)(UINTN)AllocatePages (TotalPages);
+ DEBUG ((
+ DEBUG_VERBOSE,
+ "%a: ReservedMem (%d pages) address = 0x%llx\n",
+ __FUNCTION__,
+ TotalPages,
+ PhysicalAddress
+ ));
+
+ mReservedMemBitmap = 0;
+ mReservedSharedMemAddress = PhysicalAddress;
+
+ for (Index1 = 0; Index1 < ARRAY_SIZE (mReservedMemRanges); Index1++) {
+ MemRange = &mReservedMemRanges[Index1];
+ MemRange->StartAddressOfMemRange = PhysicalAddress;
+
+ for (Index2 = 0; Index2 < MemRange->Slots; Index2++) {
+ Status = MemEncryptTdxSetPageSharedBit (
+ 0,
+ (UINT64)(UINTN)(MemRange->StartAddressOfMemRange + Index2 * SIZE_OF_MEM_RANGE (MemRange) + MemRange->HeaderSize),
+ EFI_SIZE_TO_PAGES (MemRange->DataSize)
+ );
+ ASSERT (!EFI_ERROR (Status));
+ }
+
+ PhysicalAddress += (MemRange->Slots * SIZE_OF_MEM_RANGE (MemRange));
+ }
+
+ return EFI_SUCCESS;
+}
+
+/**
+ * Release the pre-alloc shared memory.
+ *
+ * @retval EFI_SUCCESS Successfully release the shared memory
+ */
+EFI_STATUS
+IoMmuReleaseReservedSharedMem (
+ BOOLEAN MemoryMapLocked
+ )
+{
+ EFI_STATUS Status;
+ UINT32 Index1, Index2;
+ IOMMU_RESERVED_MEM_RANGE *MemRange;
+
+ if (!mReservedSharedMemSupported) {
+ return EFI_SUCCESS;
+ }
+
+ for (Index1 = 0; Index1 < ARRAY_SIZE (mReservedMemRanges); Index1++) {
+ MemRange = &mReservedMemRanges[Index1];
+ for (Index2 = 0; Index2 < MemRange->Slots; Index2++) {
+ Status = MemEncryptTdxClearPageSharedBit (
+ 0,
+ (UINT64)(UINTN)(MemRange->StartAddressOfMemRange + Index2 * SIZE_OF_MEM_RANGE (MemRange) + MemRange->HeaderSize),
+ EFI_SIZE_TO_PAGES (MemRange->DataSize)
+ );
+ ASSERT (!EFI_ERROR (Status));
+ }
+ }
+
+ if (!MemoryMapLocked) {
+ FreePages ((VOID *)(UINTN)mReservedSharedMemAddress, EFI_SIZE_TO_PAGES (CalcuateReservedMemSize ()));
+ mReservedSharedMemAddress = 0;
+ mReservedMemBitmap = 0;
+ }
+
+ mReservedSharedMemSupported = FALSE;
+
+ return EFI_SUCCESS;
+}
+
+/**
+ * Allocate from the reserved memory pool.
+ * If the reserved shared memory is exhausted or there is no suitable size, it falls
+ * back to LegacyAllocateBuffer.
+ *
+ * @param Type Allocate type
+ * @param MemoryType The memory type to be allocated
+ * @param Pages Pages to be allocated.
+ * @param ReservedMemBitmap Bitmap of the allocated memory region
+ * @param PhysicalAddress Pointer to the data part of allocated memory region
+ *
+ * @retval EFI_SUCCESS Successfully allocate the buffer
+ * @retval Other As the error code indicates
+ */
+STATIC
+EFI_STATUS
+InternalAllocateBuffer (
+ IN EFI_ALLOCATE_TYPE Type,
+ IN EFI_MEMORY_TYPE MemoryType,
+ IN UINTN Pages,
+ IN OUT UINT32 *ReservedMemBitmap,
+ IN OUT EFI_PHYSICAL_ADDRESS *PhysicalAddress
+ )
+{
+ UINT32 MemBitmap;
+ UINT8 Index;
+ IOMMU_RESERVED_MEM_RANGE *MemRange;
+ UINTN PagesOfLastMemRange;
+
+ *ReservedMemBitmap = 0;
+
+ if (Pages == 0) {
+ ASSERT (FALSE);
+ return EFI_INVALID_PARAMETER;
+ }
+
+ if (!mReservedSharedMemSupported) {
+ goto LegacyAllocateBuffer;
+ }
+
+ if (mReservedSharedMemAddress == 0) {
+ goto LegacyAllocateBuffer;
+ }
+
+ PagesOfLastMemRange = 0;
+
+ for (Index = 0; Index < ARRAY_SIZE (mReservedMemRanges); Index++) {
+ if ((Pages > PagesOfLastMemRange) && (Pages <= EFI_SIZE_TO_PAGES (mReservedMemRanges[Index].DataSize))) {
+ break;
+ }
+
+ PagesOfLastMemRange = EFI_SIZE_TO_PAGES (mReservedMemRanges[Index].DataSize);
+ }
+
+ if (Index == ARRAY_SIZE (mReservedMemRanges)) {
+ // There is no suitable size of reserved memory. Fall back to legacy allocation.
+ goto LegacyAllocateBuffer;
+ }
+
+ MemRange = &mReservedMemRanges[Index];
+
+ if ((mReservedMemBitmap & MemRange->BitmapMask) == MemRange->BitmapMask) {
+ // The reserved memory is exhausted. Fall back to legacy allocation.
+ goto LegacyAllocateBuffer;
+ }
+
+ MemBitmap = (mReservedMemBitmap & MemRange->BitmapMask) >> MemRange->Shift;
+
+ for (Index = 0; Index < MemRange->Slots; Index++) {
+ if ((MemBitmap & (UINT8)(1<<Index)) == 0) {
+ break;
+ }
+ }
+
+ ASSERT (Index != MemRange->Slots);
+
+ *PhysicalAddress = MemRange->StartAddressOfMemRange + Index * SIZE_OF_MEM_RANGE (MemRange) + MemRange->HeaderSize;
+ *ReservedMemBitmap = (UINT32)(1 << (Index + MemRange->Shift));
+
+ DEBUG ((
+ DEBUG_VERBOSE,
+ "%a: range-size: %lx, start-address=0x%llx, pages=0x%llx, bits=0x%lx, bitmap: %lx => %lx\n",
+ __FUNCTION__,
+ MemRange->DataSize,
+ *PhysicalAddress,
+ Pages,
+ *ReservedMemBitmap,
+ mReservedMemBitmap,
+ mReservedMemBitmap | *ReservedMemBitmap
+ ));
+
+ return EFI_SUCCESS;
+
+LegacyAllocateBuffer:
+
+ *ReservedMemBitmap = 0;
+ return gBS->AllocatePages (Type, MemoryType, Pages, PhysicalAddress);
+}
+
+/**
+ * Allocate reserved shared memory for bounce buffer.
+ *
+ * @param Type Allocate type
+ * @param MemoryType The memory type to be allocated
+ * @param MapInfo Pointer to the MAP_INFO
+ *
+ * @retval EFI_SUCCESS Successfully allocate the bounce buffer
+ * @retval Other As the error code indicates
+
+ */
+EFI_STATUS
+IoMmuAllocateBounceBuffer (
+ IN EFI_ALLOCATE_TYPE Type,
+ IN EFI_MEMORY_TYPE MemoryType,
+ IN OUT MAP_INFO *MapInfo
+ )
+{
+ EFI_STATUS Status;
+ UINT32 ReservedMemBitmap;
+
+ ReservedMemBitmap = 0;
+ Status = InternalAllocateBuffer (
+ Type,
+ MemoryType,
+ MapInfo->NumberOfPages,
+ &ReservedMemBitmap,
+ &MapInfo->PlainTextAddress
+ );
+ MapInfo->ReservedMemBitmap = ReservedMemBitmap;
+ mReservedMemBitmap |= ReservedMemBitmap;
+
+ ASSERT (Status == EFI_SUCCESS);
+
+ return Status;
+}
+
+/**
+ * Free the bounce buffer allocated in IoMmuAllocateBounceBuffer.
+ *
+ * @param MapInfo Pointer to the MAP_INFO
+ * @return EFI_SUCCESS Successfully free the bounce buffer.
+ */
+EFI_STATUS
+IoMmuFreeBounceBuffer (
+ IN OUT MAP_INFO *MapInfo
+ )
+{
+ if (MapInfo->ReservedMemBitmap == 0) {
+ gBS->FreePages (MapInfo->PlainTextAddress, MapInfo->NumberOfPages);
+ } else {
+ DEBUG ((
+ DEBUG_VERBOSE,
+ "%a: PlainTextAddress=0x%Lx, bits=0x%Lx, bitmap: %Lx => %Lx\n",
+ __FUNCTION__,
+ MapInfo->PlainTextAddress,
+ MapInfo->ReservedMemBitmap,
+ mReservedMemBitmap,
+ mReservedMemBitmap & ((UINT32)(~MapInfo->ReservedMemBitmap))
+ ));
+ MapInfo->PlainTextAddress = 0;
+ mReservedMemBitmap &= (UINT32)(~MapInfo->ReservedMemBitmap);
+ MapInfo->ReservedMemBitmap = 0;
+ }
+
+ return EFI_SUCCESS;
+}
+
+/**
+ * Allocate CommonBuffer from pre-allocated shared memory.
+ *
+ * @param MemoryType Memory type
+ * @param CommonBufferPages Pages of CommonBuffer
+ * @param PhysicalAddress Allocated physical address
+ * @param ReservedMemBitmap Bitmap which indicates the allocation of reserved memory
+ *
+ * @retval EFI_SUCCESS Successfully allocate the common buffer
+ * @retval Other As the error code indicates
+ */
+EFI_STATUS
+IoMmuAllocateCommonBuffer (
+ IN EFI_MEMORY_TYPE MemoryType,
+ IN UINTN CommonBufferPages,
+ OUT EFI_PHYSICAL_ADDRESS *PhysicalAddress,
+ OUT UINT32 *ReservedMemBitmap
+ )
+{
+ EFI_STATUS Status;
+
+ Status = InternalAllocateBuffer (
+ AllocateMaxAddress,
+ MemoryType,
+ CommonBufferPages,
+ ReservedMemBitmap,
+ PhysicalAddress
+ );
+ ASSERT (Status == EFI_SUCCESS);
+
+ mReservedMemBitmap |= *ReservedMemBitmap;
+
+ if (*ReservedMemBitmap != 0) {
+ *PhysicalAddress -= SIZE_4KB;
+ }
+
+ return Status;
+}
+
+/**
+ * Free CommonBuffer which is allocated by IoMmuAllocateCommonBuffer().
+ *
+ * @param CommonBufferHeader Pointer to the CommonBufferHeader
+ * @param CommonBufferPages Pages of CommonBuffer
+ *
+ * @retval EFI_SUCCESS Successfully free the common buffer
+ * @retval Other As the error code indicates
+ */
+EFI_STATUS
+IoMmuFreeCommonBuffer (
+ IN COMMON_BUFFER_HEADER *CommonBufferHeader,
+ IN UINTN CommonBufferPages
+ )
+{
+ if (!mReservedSharedMemSupported) {
+ goto LegacyFreeCommonBuffer;
+ }
+
+ if (CommonBufferHeader->ReservedMemBitmap == 0) {
+ goto LegacyFreeCommonBuffer;
+ }
+
+ DEBUG ((
+ DEBUG_VERBOSE,
+ "%a: CommonBuffer=0x%Lx, bits=0x%Lx, bitmap: %Lx => %Lx\n",
+ __FUNCTION__,
+ (UINT64)(UINTN)CommonBufferHeader + SIZE_4KB,
+ CommonBufferHeader->ReservedMemBitmap,
+ mReservedMemBitmap,
+ mReservedMemBitmap & ((UINT32)(~CommonBufferHeader->ReservedMemBitmap))
+ ));
+
+ mReservedMemBitmap &= (UINT32)(~CommonBufferHeader->ReservedMemBitmap);
+ return EFI_SUCCESS;
+
+LegacyFreeCommonBuffer:
+ return gBS->FreePages ((UINTN)CommonBufferHeader, CommonBufferPages);
+}
diff --git a/OvmfPkg/IoMmuDxe/IoMmuDxe.inf b/OvmfPkg/IoMmuDxe/IoMmuDxe.inf
index e10be1dcff49..2192145ea661 100644
--- a/OvmfPkg/IoMmuDxe/IoMmuDxe.inf
+++ b/OvmfPkg/IoMmuDxe/IoMmuDxe.inf
@@ -21,6 +21,7 @@
AmdSevIoMmu.c
AmdSevIoMmu.h
IoMmuDxe.c
+ IoMmuBuffer.c
[Packages]
MdePkg/MdePkg.dec
diff --git a/OvmfPkg/IoMmuDxe/IoMmuInternal.h b/OvmfPkg/IoMmuDxe/IoMmuInternal.h
new file mode 100644
index 000000000000..936c35aa536c
--- /dev/null
+++ b/OvmfPkg/IoMmuDxe/IoMmuInternal.h
@@ -0,0 +1,179 @@
+/** @file
+
+ Copyright (c) 2022, Intel Corporation. All rights reserved.<BR>
+
+ SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#ifndef IOMMU_INTERNAL_H_
+#define IOMMU_INTERNAL_H_
+
+#include <Base.h>
+#include <Protocol/IoMmu.h>
+#include <Uefi/UefiBaseType.h>
+#include <Uefi/UefiSpec.h>
+
+#define MAP_INFO_SIG SIGNATURE_64 ('M', 'A', 'P', '_', 'I', 'N', 'F', 'O')
+
+typedef struct {
+ UINT64 Signature;
+ LIST_ENTRY Link;
+ EDKII_IOMMU_OPERATION Operation;
+ UINTN NumberOfBytes;
+ UINTN NumberOfPages;
+ EFI_PHYSICAL_ADDRESS CryptedAddress;
+ EFI_PHYSICAL_ADDRESS PlainTextAddress;
+ UINT32 ReservedMemBitmap;
+} MAP_INFO;
+
+#define COMMON_BUFFER_SIG SIGNATURE_64 ('C', 'M', 'N', 'B', 'U', 'F', 'F', 'R')
+
+#pragma pack (1)
+//
+// The following structure enables Map() and Unmap() to perform in-place
+// decryption and encryption, respectively, for BusMasterCommonBuffer[64]
+// operations, without dynamic memory allocation or release.
+//
+// Both COMMON_BUFFER_HEADER and COMMON_BUFFER_HEADER.StashBuffer are allocated
+// by AllocateBuffer() and released by FreeBuffer().
+//
+typedef struct {
+ UINT64 Signature;
+
+ //
+ // Always allocated from EfiBootServicesData type memory, and always
+ // encrypted.
+ //
+ VOID *StashBuffer;
+
+ //
+ // Bitmap of reserved memory
+ //
+ UINT32 ReservedMemBitmap;
+
+ //
+ // Followed by the actual common buffer, starting at the next page.
+ //
+} COMMON_BUFFER_HEADER;
+
+//
+// This data structure defines a memory range in the reserved memory region.
+// Please refer to IoMmuInitReservedSharedMem() for detailed information.
+//
+// The memory region looks like:
+// |------------|----------------------------|
+// | Header | Data |
+// | 4k, private| 4k/32k/128k/etc, shared |
+// |-----------------------------------------|
+//
+typedef struct {
+ UINT32 BitmapMask;
+ UINT32 Shift;
+ UINT32 Slots;
+ UINT32 DataSize;
+ UINT32 HeaderSize;
+ EFI_PHYSICAL_ADDRESS StartAddressOfMemRange;
+} IOMMU_RESERVED_MEM_RANGE;
+#pragma pack()
+
+/**
+ * Allocate a memory region and convert it to be shared. This memory region will be
+ * used in DMA operations.
+ *
+ * The pre-allocated memory contains pieces of memory regions of different sizes. The
+ * allocation of the shared memory regions is indicated by a 32-bit bitmap (mReservedMemBitmap).
+ *
+ * The memory regions are consumed by IoMmuAllocateBuffer (in which the CommonBuffer is allocated) and
+ * IoMmuMap (in which the bounce buffer is allocated).
+ *
+ * The CommonBuffer contains 2 parts: one page for the CommonBufferHeader, which is private memory;
+ * the other part is shared memory. So the layout of a piece of memory region after initialization
+ * looks like:
+ *
+ * |------------|----------------------------|
+ * | Header | Data | <-- a piece of pre-alloc memory region
+ * | 4k, private| 4k/32k/128k/etc, shared |
+ * |-----------------------------------------|
+ *
+ * @retval EFI_SUCCESS Successfully initialize the reserved memory.
+ * @retval EFI_UNSUPPORTED This feature is not supported.
+ */
+EFI_STATUS
+IoMmuInitReservedSharedMem (
+ VOID
+ );
+
+/**
+ * Release the pre-alloc shared memory.
+ *
+ * @retval EFI_SUCCESS Successfully release the shared memory
+ */
+EFI_STATUS
+IoMmuReleaseReservedSharedMem (
+ BOOLEAN MemoryMapLocked
+ );
+
+/**
+ * Allocate reserved shared memory for bounce buffer.
+ *
+ * @param Type Allocate type
+ * @param MemoryType The memory type to be allocated
+ * @param MapInfo Pointer to the MAP_INFO
+ *
+ * @retval EFI_SUCCESS Successfully allocate the bounce buffer
+ * @retval Other As the error code indicates
+ */
+EFI_STATUS
+IoMmuAllocateBounceBuffer (
+ IN EFI_ALLOCATE_TYPE Type,
+ IN EFI_MEMORY_TYPE MemoryType,
+ IN OUT MAP_INFO *MapInfo
+ );
+
+/**
+ * Free the bounce buffer allocated in IoMmuAllocateBounceBuffer.
+ *
+ * @param MapInfo Pointer to the MAP_INFO
+ * @return EFI_SUCCESS Successfully free the bounce buffer.
+ */
+EFI_STATUS
+IoMmuFreeBounceBuffer (
+ IN OUT MAP_INFO *MapInfo
+ );
+
+/**
+ * Allocate CommonBuffer from pre-allocated shared memory.
+ *
+ * @param MemoryType Memory type
+ * @param CommonBufferPages Pages of CommonBuffer
+ * @param PhysicalAddress Allocated physical address
+ * @param ReservedMemBitmap Bitmap which indicates the allocation of reserved memory
+ *
+ * @retval EFI_SUCCESS Successfully allocate the common buffer
+ * @retval Other As the error code indicates
+ */
+EFI_STATUS
+IoMmuAllocateCommonBuffer (
+ IN EFI_MEMORY_TYPE MemoryType,
+ IN UINTN CommonBufferPages,
+ OUT EFI_PHYSICAL_ADDRESS *PhysicalAddress,
+ OUT UINT32 *ReservedMemBitmap
+ );
+
+/**
+ * Free CommonBuffer which is allocated by IoMmuAllocateCommonBuffer().
+ *
+ * @param CommonBufferHeader Pointer to the CommonBufferHeader
+ * @param CommonBufferPages Pages of CommonBuffer
+ *
+ * @retval EFI_SUCCESS Successfully free the common buffer
+ * @retval Other As the error code indicates
+ */
+EFI_STATUS
+IoMmuFreeCommonBuffer (
+ IN COMMON_BUFFER_HEADER *CommonBufferHeader,
+ IN UINTN CommonBufferPages
+ );
+
+#endif
--
2.29.2.windows.2
* [PATCH V3 2/4] OvmfPkg/IoMmuDxe: Rename AmdSevIoMmu to CcIoMmu
2022-12-15 5:18 [PATCH V3 0/4] Reserve shared memory for DMA operation Min Xu
2022-12-15 5:18 ` [PATCH V3 1/4] OvmfPkg/IoMmuDxe: Reserve shared memory region " Min Xu
@ 2022-12-15 5:18 ` Min Xu
2022-12-15 5:18 ` [PATCH V3 3/4] OvmfPkg/IoMmuDxe: Add SEV support for reserved shared memory Min Xu
` (2 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Min Xu @ 2022-12-15 5:18 UTC (permalink / raw)
To: devel
Cc: Min M Xu, Erdem Aktas, James Bottomley, Jiewen Yao, Tom Lendacky,
Gerd Hoffmann, Jiewen Yao
From: Min M Xu <min.m.xu@intel.com>
BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4171
IoMmuDxe was originally designed to support DMA operations when SEV is
enabled. Now that TDX is enabled in IoMmuDxe as well, some file names in
IoMmuDxe need to be more general. So this patch renames:
AmdSevIoMmu.h -> CcIoMmu.h
AmdSevIoMmu.c -> CcIoMmu.c
Accordingly, there are some updates in IoMmuDxe.c and IoMmuDxe.inf.
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Jiewen Yao <Jiewen.yao@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
---
OvmfPkg/IoMmuDxe/{AmdSevIoMmu.c => CcIoMmu.c} | 2 +-
OvmfPkg/IoMmuDxe/{AmdSevIoMmu.h => CcIoMmu.h} | 0
OvmfPkg/IoMmuDxe/IoMmuDxe.c | 2 +-
OvmfPkg/IoMmuDxe/IoMmuDxe.inf | 5 ++---
4 files changed, 4 insertions(+), 5 deletions(-)
rename OvmfPkg/IoMmuDxe/{AmdSevIoMmu.c => CcIoMmu.c} (96%)
rename OvmfPkg/IoMmuDxe/{AmdSevIoMmu.h => CcIoMmu.h} (100%)
diff --git a/OvmfPkg/IoMmuDxe/AmdSevIoMmu.c b/OvmfPkg/IoMmuDxe/CcIoMmu.c
similarity index 96%
rename from OvmfPkg/IoMmuDxe/AmdSevIoMmu.c
rename to OvmfPkg/IoMmuDxe/CcIoMmu.c
index 77e46bbf4a60..1479af469881 100644
--- a/OvmfPkg/IoMmuDxe/AmdSevIoMmu.c
+++ b/OvmfPkg/IoMmuDxe/CcIoMmu.c
@@ -14,7 +14,7 @@
#include <Library/PcdLib.h>
#include <ConfidentialComputingGuestAttr.h>
-#include "AmdSevIoMmu.h"
+#include "CcIoMmu.h"
#include "IoMmuInternal.h"
//
diff --git a/OvmfPkg/IoMmuDxe/AmdSevIoMmu.h b/OvmfPkg/IoMmuDxe/CcIoMmu.h
similarity index 100%
rename from OvmfPkg/IoMmuDxe/AmdSevIoMmu.h
rename to OvmfPkg/IoMmuDxe/CcIoMmu.h
diff --git a/OvmfPkg/IoMmuDxe/IoMmuDxe.c b/OvmfPkg/IoMmuDxe/IoMmuDxe.c
index 86777dd05c26..aab6d8b90687 100644
--- a/OvmfPkg/IoMmuDxe/IoMmuDxe.c
+++ b/OvmfPkg/IoMmuDxe/IoMmuDxe.c
@@ -9,7 +9,7 @@
**/
-#include "AmdSevIoMmu.h"
+#include "CcIoMmu.h"
EFI_STATUS
EFIAPI
diff --git a/OvmfPkg/IoMmuDxe/IoMmuDxe.inf b/OvmfPkg/IoMmuDxe/IoMmuDxe.inf
index 2192145ea661..17fca5285692 100644
--- a/OvmfPkg/IoMmuDxe/IoMmuDxe.inf
+++ b/OvmfPkg/IoMmuDxe/IoMmuDxe.inf
@@ -18,8 +18,8 @@
ENTRY_POINT = IoMmuDxeEntryPoint
[Sources]
- AmdSevIoMmu.c
- AmdSevIoMmu.h
+ CcIoMmu.c
+ CcIoMmu.h
IoMmuDxe.c
IoMmuBuffer.c
@@ -27,7 +27,6 @@
MdePkg/MdePkg.dec
MdeModulePkg/MdeModulePkg.dec
OvmfPkg/OvmfPkg.dec
-# UefiCpuPkg/UefiCpuPkg.dec
[LibraryClasses]
BaseLib
--
2.29.2.windows.2
* [PATCH V3 3/4] OvmfPkg/IoMmuDxe: Add SEV support for reserved shared memory
2022-12-15 5:18 [PATCH V3 0/4] Reserve shared memory for DMA operation Min Xu
2022-12-15 5:18 ` [PATCH V3 1/4] OvmfPkg/IoMmuDxe: Reserve shared memory region " Min Xu
2022-12-15 5:18 ` [PATCH V3 2/4] OvmfPkg/IoMmuDxe: Rename AmdSevIoMmu to CcIoMmu Min Xu
@ 2022-12-15 5:18 ` Min Xu
2022-12-15 5:18 ` [PATCH V3 4/4] Maintainers: Update OvmfPkg/IoMmuDxe Min Xu
2022-12-17 3:11 ` [PATCH V3 0/4] Reserve shared memory for DMA operation Yao, Jiewen
4 siblings, 0 replies; 8+ messages in thread
From: Min Xu @ 2022-12-15 5:18 UTC (permalink / raw)
To: devel
Cc: Tom Lendacky, Erdem Aktas, James Bottomley, Jiewen Yao, Min Xu,
Gerd Hoffmann, Jiewen Yao
From: Tom Lendacky <thomas.lendacky@amd.com>
Add support to use the reserved shared memory within the IoMmu library.
This improves boot times for all SEV guests, with SEV-SNP benefiting the
most as it avoids the page state change call to the hypervisor.
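As a condensed, illustrative sketch (paraphrased from the hunks below; in
the actual patch this logic sits inside the MapInfo->ReservedMemBitmap == 0
checks and the reserved-memory init/release loops, and "Pages" stands in
for the page count used at each call site), the per-guest-type handling
becomes:

  if (CC_GUEST_IS_SEV (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
    //
    // SEV: clear the encryption mask so the pages become shared.
    //
    Status = MemEncryptSevClearPageEncMask (0, SharedAddress, Pages);
  } else if (CC_GUEST_IS_TDX (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
    //
    // TDX: set the shared bit on the pages.
    //
    Status = MemEncryptTdxSetPageSharedBit (0, SharedAddress, Pages);
  } else {
    ASSERT (FALSE);
  }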
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Min Xu <min.m.xu@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Jiewen Yao <Jiewen.yao@intel.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
OvmfPkg/IoMmuDxe/CcIoMmu.c | 82 +++++++++++++++++-----------------
OvmfPkg/IoMmuDxe/IoMmuBuffer.c | 54 +++++++++++++++++-----
2 files changed, 83 insertions(+), 53 deletions(-)
diff --git a/OvmfPkg/IoMmuDxe/CcIoMmu.c b/OvmfPkg/IoMmuDxe/CcIoMmu.c
index 1479af469881..7c2843cd238a 100644
--- a/OvmfPkg/IoMmuDxe/CcIoMmu.c
+++ b/OvmfPkg/IoMmuDxe/CcIoMmu.c
@@ -223,30 +223,32 @@ IoMmuMap (
goto FreeMapInfo;
}
- if (CC_GUEST_IS_SEV (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
+ if (MapInfo->ReservedMemBitmap == 0) {
//
- // Clear the memory encryption mask on the plaintext buffer.
- //
- Status = MemEncryptSevClearPageEncMask (
- 0,
- MapInfo->PlainTextAddress,
- MapInfo->NumberOfPages
- );
- } else if (CC_GUEST_IS_TDX (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
- //
- // Set the memory shared bit.
// If MapInfo->ReservedMemBitmap is 0, it means the bounce buffer is not allocated
// from the pre-allocated shared memory, so it must be converted to shared memory here.
//
- if (MapInfo->ReservedMemBitmap == 0) {
+ if (CC_GUEST_IS_SEV (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
+ //
+ // Clear the memory encryption mask on the plaintext buffer.
+ //
+ Status = MemEncryptSevClearPageEncMask (
+ 0,
+ MapInfo->PlainTextAddress,
+ MapInfo->NumberOfPages
+ );
+ } else if (CC_GUEST_IS_TDX (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
+ //
+ // Set the memory shared bit.
+ //
Status = MemEncryptTdxSetPageSharedBit (
0,
MapInfo->PlainTextAddress,
MapInfo->NumberOfPages
);
+ } else {
+ ASSERT (FALSE);
}
- } else {
- ASSERT (FALSE);
}
ASSERT_EFI_ERROR (Status);
@@ -396,30 +398,30 @@ IoMmuUnmapWorker (
break;
}
- if (CC_GUEST_IS_SEV (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
- //
- // Restore the memory encryption mask on the area we used to hold the
- // plaintext.
- //
- Status = MemEncryptSevSetPageEncMask (
- 0,
- MapInfo->PlainTextAddress,
- MapInfo->NumberOfPages
- );
- } else if (CC_GUEST_IS_TDX (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
- //
- // Restore the memory shared bit mask on the area we used to hold the
- // plaintext.
- //
- if (MapInfo->ReservedMemBitmap == 0) {
+ if (MapInfo->ReservedMemBitmap == 0) {
+ if (CC_GUEST_IS_SEV (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
+ //
+ // Restore the memory encryption mask on the area we used to hold the
+ // plaintext.
+ //
+ Status = MemEncryptSevSetPageEncMask (
+ 0,
+ MapInfo->PlainTextAddress,
+ MapInfo->NumberOfPages
+ );
+ } else if (CC_GUEST_IS_TDX (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
+ //
+ // Restore the memory shared bit mask on the area we used to hold the
+ // plaintext.
+ //
Status = MemEncryptTdxClearPageSharedBit (
0,
MapInfo->PlainTextAddress,
MapInfo->NumberOfPages
);
+ } else {
+ ASSERT (FALSE);
}
- } else {
- ASSERT (FALSE);
}
ASSERT_EFI_ERROR (Status);
@@ -924,16 +926,14 @@ InstallIoMmuProtocol (
}
//
- // Currently only a TDX guest supports reserved shared memory for DMA operations.
+ // For CC guests, use reserved shared memory for DMA operations.
//
- if (CC_GUEST_IS_TDX (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
- mReservedSharedMemSupported = TRUE;
- Status = IoMmuInitReservedSharedMem ();
- if (EFI_ERROR (Status)) {
- mReservedSharedMemSupported = FALSE;
- } else {
- DEBUG ((DEBUG_INFO, "%a: Feature of reserved memory for DMA is supported.\n", __FUNCTION__));
- }
+ mReservedSharedMemSupported = TRUE;
+ Status = IoMmuInitReservedSharedMem ();
+ if (EFI_ERROR (Status)) {
+ mReservedSharedMemSupported = FALSE;
+ } else {
+ DEBUG ((DEBUG_INFO, "%a: Feature of reserved memory for DMA is supported.\n", __FUNCTION__));
}
return EFI_SUCCESS;
diff --git a/OvmfPkg/IoMmuDxe/IoMmuBuffer.c b/OvmfPkg/IoMmuDxe/IoMmuBuffer.c
index 2675223aa0df..83c7c5958340 100644
--- a/OvmfPkg/IoMmuDxe/IoMmuBuffer.c
+++ b/OvmfPkg/IoMmuDxe/IoMmuBuffer.c
@@ -9,7 +9,9 @@
#include <Library/BaseMemoryLib.h>
#include <Library/DebugLib.h>
#include <Library/MemoryAllocationLib.h>
+#include <Library/MemEncryptSevLib.h>
#include <Library/MemEncryptTdxLib.h>
+#include <Library/PcdLib.h>
#include <Library/UefiBootServicesTableLib.h>
#include "IoMmuInternal.h"
@@ -139,6 +141,7 @@ IoMmuInitReservedSharedMem (
UINTN TotalPages;
IOMMU_RESERVED_MEM_RANGE *MemRange;
EFI_PHYSICAL_ADDRESS PhysicalAddress;
+ UINT64 SharedAddress;
if (!mReservedSharedMemSupported) {
return EFI_UNSUPPORTED;
@@ -163,12 +166,25 @@ IoMmuInitReservedSharedMem (
MemRange->StartAddressOfMemRange = PhysicalAddress;
for (Index2 = 0; Index2 < MemRange->Slots; Index2++) {
- Status = MemEncryptTdxSetPageSharedBit (
- 0,
- (UINT64)(UINTN)(MemRange->StartAddressOfMemRange + Index2 * SIZE_OF_MEM_RANGE (MemRange) + MemRange->HeaderSize),
- EFI_SIZE_TO_PAGES (MemRange->DataSize)
- );
- ASSERT (!EFI_ERROR (Status));
+ SharedAddress = (UINT64)(UINTN)(MemRange->StartAddressOfMemRange + Index2 * SIZE_OF_MEM_RANGE (MemRange) + MemRange->HeaderSize);
+
+ if (CC_GUEST_IS_SEV (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
+ Status = MemEncryptSevClearPageEncMask (
+ 0,
+ SharedAddress,
+ EFI_SIZE_TO_PAGES (MemRange->DataSize)
+ );
+ ASSERT (!EFI_ERROR (Status));
+ } else if (CC_GUEST_IS_TDX (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
+ Status = MemEncryptTdxSetPageSharedBit (
+ 0,
+ SharedAddress,
+ EFI_SIZE_TO_PAGES (MemRange->DataSize)
+ );
+ ASSERT (!EFI_ERROR (Status));
+ } else {
+ ASSERT (FALSE);
+ }
}
PhysicalAddress += (MemRange->Slots * SIZE_OF_MEM_RANGE (MemRange));
@@ -190,6 +206,7 @@ IoMmuReleaseReservedSharedMem (
EFI_STATUS Status;
UINT32 Index1, Index2;
IOMMU_RESERVED_MEM_RANGE *MemRange;
+ UINT64 SharedAddress;
if (!mReservedSharedMemSupported) {
return EFI_SUCCESS;
@@ -198,12 +215,25 @@ IoMmuReleaseReservedSharedMem (
for (Index1 = 0; Index1 < ARRAY_SIZE (mReservedMemRanges); Index1++) {
MemRange = &mReservedMemRanges[Index1];
for (Index2 = 0; Index2 < MemRange->Slots; Index2++) {
- Status = MemEncryptTdxClearPageSharedBit (
- 0,
- (UINT64)(UINTN)(MemRange->StartAddressOfMemRange + Index2 * SIZE_OF_MEM_RANGE (MemRange) + MemRange->HeaderSize),
- EFI_SIZE_TO_PAGES (MemRange->DataSize)
- );
- ASSERT (!EFI_ERROR (Status));
+ SharedAddress = (UINT64)(UINTN)(MemRange->StartAddressOfMemRange + Index2 * SIZE_OF_MEM_RANGE (MemRange) + MemRange->HeaderSize);
+
+ if (CC_GUEST_IS_SEV (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
+ Status = MemEncryptSevSetPageEncMask (
+ 0,
+ SharedAddress,
+ EFI_SIZE_TO_PAGES (MemRange->DataSize)
+ );
+ ASSERT (!EFI_ERROR (Status));
+ } else if (CC_GUEST_IS_TDX (PcdGet64 (PcdConfidentialComputingGuestAttr))) {
+ Status = MemEncryptTdxClearPageSharedBit (
+ 0,
+ SharedAddress,
+ EFI_SIZE_TO_PAGES (MemRange->DataSize)
+ );
+ ASSERT (!EFI_ERROR (Status));
+ } else {
+ ASSERT (FALSE);
+ }
}
}
--
2.29.2.windows.2
* [PATCH V3 4/4] Maintainers: Update OvmfPkg/IoMmuDxe
2022-12-15 5:18 [PATCH V3 0/4] Reserve shared memory for DMA operation Min Xu
` (2 preceding siblings ...)
2022-12-15 5:18 ` [PATCH V3 3/4] OvmfPkg/IoMmuDxe: Add SEV support for reserved shared memory Min Xu
@ 2022-12-15 5:18 ` Min Xu
2022-12-17 3:11 ` [PATCH V3 0/4] Reserve shared memory for DMA operation Yao, Jiewen
4 siblings, 0 replies; 8+ messages in thread
From: Min Xu @ 2022-12-15 5:18 UTC (permalink / raw)
To: devel
Cc: Min M Xu, Michael D Kinney, Liming Gao, Erdem Aktas,
Gerd Hoffmann, James Bottomley, Jiewen Yao, Tom Lendacky,
Jiewen Yao
From: Min M Xu <min.m.xu@intel.com>
https://bugzilla.tianocore.org/show_bug.cgi?id=4171
AmdSevIoMmu.* is renamed to CcIoMmu.*. The related section in
Maintainers.txt should be updated as well.
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Jiewen Yao <Jiewen.yao@intel.com>
Signed-off-by: Min Xu <min.m.xu@intel.com>
---
Maintainers.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Maintainers.txt b/Maintainers.txt
index 7e98083685bc..9293a3bb91be 100644
--- a/Maintainers.txt
+++ b/Maintainers.txt
@@ -478,7 +478,7 @@ F: OvmfPkg/AmdSev/
F: OvmfPkg/AmdSevDxe/
F: OvmfPkg/Include/Guid/ConfidentialComputingSecret.h
F: OvmfPkg/Include/Library/MemEncryptSevLib.h
-F: OvmfPkg/IoMmuDxe/AmdSevIoMmu.*
+F: OvmfPkg/IoMmuDxe/CcIoMmu.*
F: OvmfPkg/Library/BaseMemEncryptSevLib/
F: OvmfPkg/Library/PlatformBootManagerLibGrub/
F: OvmfPkg/Library/CcExitLib/
--
2.29.2.windows.2
* Re: [PATCH V3 0/4] Reserve shared memory for DMA operation
2022-12-15 5:18 [PATCH V3 0/4] Reserve shared memory for DMA operation Min Xu
` (3 preceding siblings ...)
2022-12-15 5:18 ` [PATCH V3 4/4] Maintainers: Update OvmfPkg/IoMmuDxe Min Xu
@ 2022-12-17 3:11 ` Yao, Jiewen
2022-12-17 14:18 ` Lendacky, Thomas
4 siblings, 1 reply; 8+ messages in thread
From: Yao, Jiewen @ 2022-12-17 3:11 UTC (permalink / raw)
To: Xu, Min M, devel@edk2.groups.io
Cc: Aktas, Erdem, James Bottomley, Gerd Hoffmann, Kinney, Michael D,
Gao, Liming, Tom Lendacky
Hi Tom
Any further discussion?
I would like to merge if no other feedback.
> -----Original Message-----
> From: Xu, Min M <min.m.xu@intel.com>
> Sent: Thursday, December 15, 2022 1:18 PM
> To: devel@edk2.groups.io
> Cc: Xu, Min M <min.m.xu@intel.com>; Aktas, Erdem
> <erdemaktas@google.com>; James Bottomley <jejb@linux.ibm.com>; Yao,
> Jiewen <jiewen.yao@intel.com>; Gerd Hoffmann <kraxel@redhat.com>;
> Kinney, Michael D <michael.d.kinney@intel.com>; Gao, Liming
> <gaoliming@byosoft.com.cn>; Tom Lendacky <thomas.lendacky@amd.com>
> Subject: [PATCH V3 0/4] Reserve shared memory for DMA operation
>
> BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4171
>
> This patch-set introduces the feature of reserving shared memory
> for DMA operation. Its intention is to reduce the allocation and
> conversion of private/shared memory, so that boot performance
> can be improved significantly. Detailed information is in Patch#1.
>
> Patch#2 renames AmdSevIoMmu.* to CcIoMmu.* because these 2 files
> support both SEV and TDX guest.
>
> Patch#3 is provided by Tom Lendacky which add SEV support for reserved
> shared memory.
>
> Patch#4 updates the related section in Maintainers.txt.
>
> Code: https://github.com/mxu9/edk2/tree/IoMmu.v3
>
> v3 changes:
> - Check mReservedSharedMemSupported in
> IoMmuReleaseReservedSharedMem
> before release the pre-allocated shared memory.
> - Delete a line of comment in Patch#3 which is suggested by Tom Lendacky.
>
> v2 changes:
> - Add Patch#3 which is provided by Tom Lendacky. It adds SEV support for
> reserved shared memory.
> - Add more description for mReservedMemRanges. It describes:
> 1) How the pre-allocated memory is managed.
> 2) What if the pre-allocated memory is used up.
>
> Cc: Erdem Aktas <erdemaktas@google.com>
> Cc: James Bottomley <jejb@linux.ibm.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Liming Gao <gaoliming@byosoft.com.cn>
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> Signed-off-by: Min Xu <min.m.xu@intel.com>
>
> Min M Xu (3):
> OvmfPkg/IoMmuDxe: Reserve shared memory region for DMA operation
> OvmfPkg/IoMmuDxe: Rename AmdSevIoMmu to CcIoMmu
> Maintainers: Update OvmfPkg/IoMmuDxe
>
> Tom Lendacky (1):
> OvmfPkg/IoMmuDxe: Add SEV support for reserved shared memory
>
> Maintainers.txt | 2 +-
> OvmfPkg/IoMmuDxe/{AmdSevIoMmu.c => CcIoMmu.c} | 204 ++++----
> OvmfPkg/IoMmuDxe/{AmdSevIoMmu.h => CcIoMmu.h} | 0
> OvmfPkg/IoMmuDxe/IoMmuBuffer.c | 495 ++++++++++++++++++
> OvmfPkg/IoMmuDxe/IoMmuDxe.c | 2 +-
> OvmfPkg/IoMmuDxe/IoMmuDxe.inf | 6 +-
> OvmfPkg/IoMmuDxe/IoMmuInternal.h | 179 +++++++
> 7 files changed, 781 insertions(+), 107 deletions(-)
> rename OvmfPkg/IoMmuDxe/{AmdSevIoMmu.c => CcIoMmu.c} (85%)
> rename OvmfPkg/IoMmuDxe/{AmdSevIoMmu.h => CcIoMmu.h} (100%)
> create mode 100644 OvmfPkg/IoMmuDxe/IoMmuBuffer.c
> create mode 100644 OvmfPkg/IoMmuDxe/IoMmuInternal.h
>
> --
> 2.29.2.windows.2
* Re: [PATCH V3 0/4] Reserve shared memory for DMA operation
2022-12-17 3:11 ` [PATCH V3 0/4] Reserve shared memory for DMA operation Yao, Jiewen
@ 2022-12-17 14:18 ` Lendacky, Thomas
2022-12-18 2:15 ` Yao, Jiewen
0 siblings, 1 reply; 8+ messages in thread
From: Lendacky, Thomas @ 2022-12-17 14:18 UTC (permalink / raw)
To: Yao, Jiewen, Xu, Min M, devel@edk2.groups.io
Cc: Aktas, Erdem, James Bottomley, Gerd Hoffmann, Kinney, Michael D,
Gao, Liming
On 12/16/22 21:11, Yao, Jiewen wrote:
> Hi Tom
> Any further discussion?
>
> I would like to merge if no other feedback.
No other feedback from me. For the series:
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
>
>> -----Original Message-----
>> From: Xu, Min M <min.m.xu@intel.com>
>> Sent: Thursday, December 15, 2022 1:18 PM
>> To: devel@edk2.groups.io
>> Cc: Xu, Min M <min.m.xu@intel.com>; Aktas, Erdem
>> <erdemaktas@google.com>; James Bottomley <jejb@linux.ibm.com>; Yao,
>> Jiewen <jiewen.yao@intel.com>; Gerd Hoffmann <kraxel@redhat.com>;
>> Kinney, Michael D <michael.d.kinney@intel.com>; Gao, Liming
>> <gaoliming@byosoft.com.cn>; Tom Lendacky <thomas.lendacky@amd.com>
>> Subject: [PATCH V3 0/4] Reserve shared memory for DMA operation
>>
>> BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4171
>>
>> This patch-set introduces the feature of reserving shared memory
>> for DMA operation. Its intention is to reduce the allocation and
>> conversion of private/shared memory, so that boot performance
>> can be improved significantly. Detailed information is in Patch#1.
>>
>> Patch#2 renames AmdSevIoMmu.* to CcIoMmu.* because these 2 files
>> support both SEV and TDX guest.
>>
>> Patch#3 is provided by Tom Lendacky which add SEV support for reserved
>> shared memory.
>>
>> Patch#4 updates the related section in Maintainers.txt.
>>
>> Code: https://github.com/mxu9/edk2/tree/IoMmu.v3
>>
>> v3 changes:
>> - Check mReservedSharedMemSupported in
>> IoMmuReleaseReservedSharedMem
>> before release the pre-allocated shared memory.
>> - Delete a line of comment in Patch#3 which is suggested by Tom Lendacky.
>>
>> v2 changes:
>> - Add Patch#3 which is provided by Tom Lendacky. It adds SEV support for
>> reserved shared memory.
>> - Add more description for mReservedMemRanges. It describes:
>> 1) How the pre-allocated memory is managed.
>> 2) What if the pre-allocated memory is used up.
>>
>> Cc: Erdem Aktas <erdemaktas@google.com>
>> Cc: James Bottomley <jejb@linux.ibm.com>
>> Cc: Jiewen Yao <jiewen.yao@intel.com>
>> Cc: Gerd Hoffmann <kraxel@redhat.com>
>> Cc: Michael D Kinney <michael.d.kinney@intel.com>
>> Cc: Liming Gao <gaoliming@byosoft.com.cn>
>> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
>> Signed-off-by: Min Xu <min.m.xu@intel.com>
>>
>> Min M Xu (3):
>> OvmfPkg/IoMmuDxe: Reserve shared memory region for DMA operation
>> OvmfPkg/IoMmuDxe: Rename AmdSevIoMmu to CcIoMmu
>> Maintainers: Update OvmfPkg/IoMmuDxe
>>
>> Tom Lendacky (1):
>> OvmfPkg/IoMmuDxe: Add SEV support for reserved shared memory
>>
>> Maintainers.txt | 2 +-
>> OvmfPkg/IoMmuDxe/{AmdSevIoMmu.c => CcIoMmu.c} | 204 ++++----
>> OvmfPkg/IoMmuDxe/{AmdSevIoMmu.h => CcIoMmu.h} | 0
>> OvmfPkg/IoMmuDxe/IoMmuBuffer.c | 495 ++++++++++++++++++
>> OvmfPkg/IoMmuDxe/IoMmuDxe.c | 2 +-
>> OvmfPkg/IoMmuDxe/IoMmuDxe.inf | 6 +-
>> OvmfPkg/IoMmuDxe/IoMmuInternal.h | 179 +++++++
>> 7 files changed, 781 insertions(+), 107 deletions(-)
>> rename OvmfPkg/IoMmuDxe/{AmdSevIoMmu.c => CcIoMmu.c} (85%)
>> rename OvmfPkg/IoMmuDxe/{AmdSevIoMmu.h => CcIoMmu.h} (100%)
>> create mode 100644 OvmfPkg/IoMmuDxe/IoMmuBuffer.c
>> create mode 100644 OvmfPkg/IoMmuDxe/IoMmuInternal.h
>>
>> --
>> 2.29.2.windows.2
>
* Re: [PATCH V3 0/4] Reserve shared memory for DMA operation
2022-12-17 14:18 ` Lendacky, Thomas
@ 2022-12-18 2:15 ` Yao, Jiewen
0 siblings, 0 replies; 8+ messages in thread
From: Yao, Jiewen @ 2022-12-18 2:15 UTC (permalink / raw)
To: Tom Lendacky, Xu, Min M, devel@edk2.groups.io
Cc: Aktas, Erdem, James Bottomley, Gerd Hoffmann, Kinney, Michael D,
Gao, Liming
Merged https://github.com/tianocore/edk2/pull/3784
> -----Original Message-----
> From: Tom Lendacky <thomas.lendacky@amd.com>
> Sent: Saturday, December 17, 2022 10:18 PM
> To: Yao, Jiewen <jiewen.yao@intel.com>; Xu, Min M <min.m.xu@intel.com>;
> devel@edk2.groups.io
> Cc: Aktas, Erdem <erdemaktas@google.com>; James Bottomley
> <jejb@linux.ibm.com>; Gerd Hoffmann <kraxel@redhat.com>; Kinney,
> Michael D <michael.d.kinney@intel.com>; Gao, Liming
> <gaoliming@byosoft.com.cn>
> Subject: Re: [PATCH V3 0/4] Reserve shared memory for DMA operation
>
> On 12/16/22 21:11, Yao, Jiewen wrote:
> > Hi Tom
> > Any further discussion?
> >
> > I would like to merge if no other feedback.
>
> No other feedback from me. For the series:
>
> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
>
> >
> >> -----Original Message-----
> >> From: Xu, Min M <min.m.xu@intel.com>
> >> Sent: Thursday, December 15, 2022 1:18 PM
> >> To: devel@edk2.groups.io
> >> Cc: Xu, Min M <min.m.xu@intel.com>; Aktas, Erdem
> >> <erdemaktas@google.com>; James Bottomley <jejb@linux.ibm.com>; Yao,
> >> Jiewen <jiewen.yao@intel.com>; Gerd Hoffmann <kraxel@redhat.com>;
> >> Kinney, Michael D <michael.d.kinney@intel.com>; Gao, Liming
> >> <gaoliming@byosoft.com.cn>; Tom Lendacky
> <thomas.lendacky@amd.com>
> >> Subject: [PATCH V3 0/4] Reserve shared memory for DMA operation
> >>
> >> BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=4171
> >>
> >> This patch-set introduces the feature of reserving shared memory
> >> for DMA operation. Its intention is to reduce the allocation and
> >> conversion of private/shared memory, so that boot performance
> >> can be improved significantly. Detailed information is in Patch#1.
> >>
> >> Patch#2 renames AmdSevIoMmu.* to CcIoMmu.* because these 2 files
> >> support both SEV and TDX guest.
> >>
> >> Patch#3 is provided by Tom Lendacky which add SEV support for reserved
> >> shared memory.
> >>
> >> Patch#4 updates the related section in Maintainers.txt.
> >>
> >> Code: https://github.com/mxu9/edk2/tree/IoMmu.v3
> >>
> >> v3 changes:
> >> - Check mReservedSharedMemSupported in
> >> IoMmuReleaseReservedSharedMem
> >> before release the pre-allocated shared memory.
> >> - Delete a line of comment in Patch#3 which is suggested by Tom
> Lendacky.
> >>
> >> v2 changes:
> >> - Add Patch#3 which is provided by Tom Lendacky. It adds SEV support
> for
> >> reserved shared memory.
> >> - Add more description for mReservedMemRanges. It describes:
> >> 1) How the pre-allocated memory is managed.
> >> 2) What if the pre-allocated memory is used up.
> >>
> >> Cc: Erdem Aktas <erdemaktas@google.com>
> >> Cc: James Bottomley <jejb@linux.ibm.com>
> >> Cc: Jiewen Yao <jiewen.yao@intel.com>
> >> Cc: Gerd Hoffmann <kraxel@redhat.com>
> >> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> >> Cc: Liming Gao <gaoliming@byosoft.com.cn>
> >> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> >> Signed-off-by: Min Xu <min.m.xu@intel.com>
> >>
> >> Min M Xu (3):
> >> OvmfPkg/IoMmuDxe: Reserve shared memory region for DMA
> operation
> >> OvmfPkg/IoMmuDxe: Rename AmdSevIoMmu to CcIoMmu
> >> Maintainers: Update OvmfPkg/IoMmuDxe
> >>
> >> Tom Lendacky (1):
> >> OvmfPkg/IoMmuDxe: Add SEV support for reserved shared memory
> >>
> >> Maintainers.txt | 2 +-
> >> OvmfPkg/IoMmuDxe/{AmdSevIoMmu.c => CcIoMmu.c} | 204 ++++----
> >> OvmfPkg/IoMmuDxe/{AmdSevIoMmu.h => CcIoMmu.h} | 0
> >> OvmfPkg/IoMmuDxe/IoMmuBuffer.c | 495
> ++++++++++++++++++
> >> OvmfPkg/IoMmuDxe/IoMmuDxe.c | 2 +-
> >> OvmfPkg/IoMmuDxe/IoMmuDxe.inf | 6 +-
> >> OvmfPkg/IoMmuDxe/IoMmuInternal.h | 179 +++++++
> >> 7 files changed, 781 insertions(+), 107 deletions(-)
> >> rename OvmfPkg/IoMmuDxe/{AmdSevIoMmu.c => CcIoMmu.c} (85%)
> >> rename OvmfPkg/IoMmuDxe/{AmdSevIoMmu.h => CcIoMmu.h} (100%)
> >> create mode 100644 OvmfPkg/IoMmuDxe/IoMmuBuffer.c
> >> create mode 100644 OvmfPkg/IoMmuDxe/IoMmuInternal.h
> >>
> >> --
> >> 2.29.2.windows.2
> >