* [PATCH v7 0/2] Add IntelVTdDmarPei Driver
@ 2021-01-08 2:49 Sheng Wei
2021-01-08 2:49 ` [PATCH v7 1/2] IntelSiliconPkg/VTd: " Sheng Wei
2021-01-08 2:49 ` [PATCH v7 2/2] IntelSiliconPkg/VTd: Add IntelVTdDmarPei to IntelSiliconPkg Sheng Wei
0 siblings, 2 replies; 3+ messages in thread
From: Sheng Wei @ 2021-01-08 2:49 UTC (permalink / raw)
To: devel; +Cc: Ray Ni, Rangasai V Chaganty, Jiewen Yao, Jenny Huang, Feng Roger
The IntelVTdDmarPei driver is used for DMA protection in the PEI phase.
In the pre-memory phase, all DMA access is blocked.
In the post-memory phase, it adds selected memory ranges to the IOMMU
DMA remapping translation table, so that only the memory allocated by
EdkiiIoMmuPpi has DMA access. For the rest of the memory, DMA access
is blocked.
Add IntelVTdDmarPei to IntelSiliconPkg.
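For illustration only (not part of this patch), a PEI module that needs
DMA-capable memory would locate gEdkiiIoMmuPpiGuid and allocate through
it; ExampleAllocateDmaBuffer below is a hypothetical consumer sketch,
assuming the standard EDKII_IOMMU_PPI definitions from MdeModulePkg:

  #include <PiPei.h>
  #include <Library/PeiServicesLib.h>
  #include <Ppi/IoMmu.h>

  //
  // Hypothetical consumer sketch: allocate one page from the protected
  // DMA buffer via EDKII_IOMMU_PPI so the device is granted access to
  // it; memory obtained any other way stays blocked by the VTd engine.
  //
  EFI_STATUS
  ExampleAllocateDmaBuffer (
    OUT VOID  **HostAddress
    )
  {
    EFI_STATUS       Status;
    EDKII_IOMMU_PPI  *IoMmu;

    Status = PeiServicesLocatePpi (&gEdkiiIoMmuPpiGuid, 0, NULL, (VOID **) &IoMmu);
    if (EFI_ERROR (Status)) {
      return Status;
    }

    return IoMmu->AllocateBuffer (IoMmu, EfiBootServicesData, 1, HostAddress, 0);
  }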
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3095
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rangasai V Chaganty <rangasai.v.chaganty@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jenny Huang <jenny.huang@intel.com>
Cc: Feng Roger <roger.feng@intel.com>
Reviewed-by: Jenny Huang <jenny.huang@intel.com>
Sheng Wei (2):
IntelSiliconPkg/VTd: Add IntelVTdDmarPei Driver
IntelSiliconPkg/VTd: Add IntelVTdDmarPei to IntelSiliconPkg
.../Feature/VTd/IntelVTdDmarPei/DmarTable.c | 818 +++++++++++++++
.../Feature/VTd/IntelVTdDmarPei/IntelVTdDmar.c | 466 +++++++++
.../Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.c | 814 +++++++++++++++
.../Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.h | 224 +++++
.../VTd/IntelVTdDmarPei/IntelVTdDmarPei.inf | 65 ++
.../VTd/IntelVTdDmarPei/IntelVTdDmarPei.uni | 14 +
.../VTd/IntelVTdDmarPei/IntelVTdDmarPeiExtra.uni | 14 +
.../Feature/VTd/IntelVTdDmarPei/TranslationTable.c | 1045 ++++++++++++++++++++
Silicon/Intel/IntelSiliconPkg/IntelSiliconPkg.dsc | 1 +
9 files changed, 3461 insertions(+)
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/DmarTable.c
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmar.c
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.c
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.h
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.inf
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.uni
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPeiExtra.uni
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/TranslationTable.c
--
2.16.2.windows.1
* [PATCH v7 1/2] IntelSiliconPkg/VTd: Add IntelVTdDmarPei Driver
2021-01-08 2:49 [PATCH v7 0/2] Add IntelVTdDmarPei Driver Sheng Wei
@ 2021-01-08 2:49 ` Sheng Wei
2021-01-08 2:49 ` [PATCH v7 2/2] IntelSiliconPkg/VTd: Add IntelVTdDmarPei to IntelSiliconPkg Sheng Wei
1 sibling, 0 replies; 3+ messages in thread
From: Sheng Wei @ 2021-01-08 2:49 UTC (permalink / raw)
To: devel; +Cc: Ray Ni, Rangasai V Chaganty, Jiewen Yao, Jenny Huang, Feng Roger
The IntelVTdDmarPei driver is used for DMA protection in the PEI phase.
In the pre-memory phase, all DMA access is blocked.
In the post-memory phase, it adds selected memory ranges to the IOMMU
DMA remapping translation table, so that only the memory allocated by
EdkiiIoMmuPpi has DMA access. For the rest of the memory, DMA access
is blocked.
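For illustration only (not part of this patch), a typical bus-master
write transaction against this PPI pairs Map() and Unmap(); the copy
back into the host buffer happens in Unmap(). ExampleBusMasterWrite
below is a hypothetical sketch; programming the device is a placeholder:

  #include <PiPei.h>
  #include <Ppi/IoMmu.h>

  //
  // Hypothetical sketch: map HostBuffer for a bus-master write, let the
  // device DMA into DeviceAddress, then Unmap() to copy the result back.
  //
  EFI_STATUS
  ExampleBusMasterWrite (
    IN EDKII_IOMMU_PPI  *IoMmu,
    IN VOID             *HostBuffer,
    IN UINTN            Size
    )
  {
    EFI_STATUS            Status;
    UINTN                 Bytes;
    EFI_PHYSICAL_ADDRESS  DeviceAddress;
    VOID                  *Mapping;

    Bytes  = Size;
    Status = IoMmu->Map (IoMmu, EdkiiIoMmuOperationBusMasterWrite, HostBuffer, &Bytes, &DeviceAddress, &Mapping);
    if (EFI_ERROR (Status)) {
      return Status;
    }

    //
    // ... program the device to DMA into DeviceAddress (placeholder) ...
    //

    return IoMmu->Unmap (IoMmu, Mapping);
  }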
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3095
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rangasai V Chaganty <rangasai.v.chaganty@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jenny Huang <jenny.huang@intel.com>
Cc: Feng Roger <roger.feng@intel.com>
Reviewed-by: Jenny Huang <jenny.huang@intel.com>
---
.../Feature/VTd/IntelVTdDmarPei/DmarTable.c | 818 +++++++++++++++
.../Feature/VTd/IntelVTdDmarPei/IntelVTdDmar.c | 466 +++++++++
.../Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.c | 814 +++++++++++++++
.../Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.h | 224 +++++
.../VTd/IntelVTdDmarPei/IntelVTdDmarPei.inf | 65 ++
.../VTd/IntelVTdDmarPei/IntelVTdDmarPei.uni | 14 +
.../VTd/IntelVTdDmarPei/IntelVTdDmarPeiExtra.uni | 14 +
.../Feature/VTd/IntelVTdDmarPei/TranslationTable.c | 1045 ++++++++++++++++++++
8 files changed, 3460 insertions(+)
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/DmarTable.c
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmar.c
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.c
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.h
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.inf
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.uni
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPeiExtra.uni
create mode 100644 Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/TranslationTable.c
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/DmarTable.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/DmarTable.c
new file mode 100644
index 00000000..d188f917
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/DmarTable.c
@@ -0,0 +1,818 @@
+/** @file
+
+ Copyright (c) 2020, Intel Corporation. All rights reserved.<BR>
+ SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include <Uefi.h>
+#include <PiPei.h>
+#include <Library/BaseLib.h>
+#include <Library/BaseMemoryLib.h>
+#include <Library/MemoryAllocationLib.h>
+#include <Library/DebugLib.h>
+#include <Library/HobLib.h>
+#include <Library/PciSegmentLib.h>
+#include <IndustryStandard/Vtd.h>
+#include <IndustryStandard/Pci.h>
+#include <Protocol/IoMmu.h>
+#include <Ppi/VtdInfo.h>
+
+#include "IntelVTdDmarPei.h"
+
+/**
+ Dump DMAR DeviceScopeEntry.
+
+ @param[in] DmarDeviceScopeEntry DMAR DeviceScopeEntry
+**/
+VOID
+DumpDmarDeviceScopeEntry (
+ IN EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *DmarDeviceScopeEntry
+ )
+{
+ UINTN PciPathNumber;
+ UINTN PciPathIndex;
+ EFI_ACPI_DMAR_PCI_PATH *PciPath;
+
+ if (DmarDeviceScopeEntry == NULL) {
+ return;
+ }
+
+ DEBUG ((DEBUG_INFO,
+ " *************************************************************************\n"
+ ));
+ DEBUG ((DEBUG_INFO,
+ " * DMA-Remapping Device Scope Entry Structure *\n"
+ ));
+ DEBUG ((DEBUG_INFO,
+ " *************************************************************************\n"
+ ));
+ DEBUG ((DEBUG_INFO,
+ (sizeof (UINTN) == sizeof (UINT64)) ?
+ " DMAR Device Scope Entry address ...................... 0x%016lx\n" :
+ " DMAR Device Scope Entry address ...................... 0x%08x\n",
+ DmarDeviceScopeEntry
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Device Scope Entry Type ............................ 0x%02x\n",
+ DmarDeviceScopeEntry->Type
+ ));
+ switch (DmarDeviceScopeEntry->Type) {
+ case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_ENDPOINT:
+ DEBUG ((DEBUG_INFO,
+ " PCI Endpoint Device\n"
+ ));
+ break;
+ case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_BRIDGE:
+ DEBUG ((DEBUG_INFO,
+ " PCI Sub-hierachy\n"
+ ));
+ break;
+ default:
+ break;
+ }
+ DEBUG ((DEBUG_INFO,
+ " Length ............................................. 0x%02x\n",
+ DmarDeviceScopeEntry->Length
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Enumeration ID ..................................... 0x%02x\n",
+ DmarDeviceScopeEntry->EnumerationId
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Starting Bus Number ................................ 0x%02x\n",
+ DmarDeviceScopeEntry->StartBusNumber
+ ));
+
+ PciPathNumber = (DmarDeviceScopeEntry->Length - sizeof (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER)) / sizeof (EFI_ACPI_DMAR_PCI_PATH);
+ PciPath = (EFI_ACPI_DMAR_PCI_PATH *) (DmarDeviceScopeEntry + 1);
+ for (PciPathIndex = 0; PciPathIndex < PciPathNumber; PciPathIndex++) {
+ DEBUG ((DEBUG_INFO,
+ " Device ............................................. 0x%02x\n",
+ PciPath[PciPathIndex].Device
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Function ........................................... 0x%02x\n",
+ PciPath[PciPathIndex].Function
+ ));
+ }
+
+ DEBUG ((DEBUG_INFO,
+ " *************************************************************************\n\n"
+ ));
+
+ return;
+}
+
+/**
+ Dump DMAR RMRR table.
+
+ @param[in] Rmrr DMAR RMRR table
+**/
+VOID
+DumpDmarRmrr (
+ IN EFI_ACPI_DMAR_RMRR_HEADER *Rmrr
+ )
+{
+ EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *DmarDeviceScopeEntry;
+ INTN RmrrLen;
+
+ if (Rmrr == NULL) {
+ return;
+ }
+
+ DEBUG ((DEBUG_INFO,
+ " ***************************************************************************\n"
+ ));
+ DEBUG ((DEBUG_INFO,
+ " * Reserved Memory Region Reporting Structure *\n"
+ ));
+ DEBUG ((DEBUG_INFO,
+ " ***************************************************************************\n"
+ ));
+ DEBUG ((DEBUG_INFO,
+ (sizeof (UINTN) == sizeof (UINT64)) ?
+ " RMRR address ........................................... 0x%016lx\n" :
+ " RMRR address ........................................... 0x%08x\n",
+ Rmrr
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Type ................................................. 0x%04x\n",
+ Rmrr->Header.Type
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Length ............................................... 0x%04x\n",
+ Rmrr->Header.Length
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Segment Number ....................................... 0x%04x\n",
+ Rmrr->SegmentNumber
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Reserved Memory Region Base Address .................. 0x%016lx\n",
+ Rmrr->ReservedMemoryRegionBaseAddress
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Reserved Memory Region Limit Address ................. 0x%016lx\n",
+ Rmrr->ReservedMemoryRegionLimitAddress
+ ));
+
+ RmrrLen = Rmrr->Header.Length - sizeof(EFI_ACPI_DMAR_RMRR_HEADER);
+ DmarDeviceScopeEntry = (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *) (Rmrr + 1);
+ while (RmrrLen > 0) {
+ DumpDmarDeviceScopeEntry (DmarDeviceScopeEntry);
+ RmrrLen -= DmarDeviceScopeEntry->Length;
+ DmarDeviceScopeEntry = (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *) ((UINTN) DmarDeviceScopeEntry + DmarDeviceScopeEntry->Length);
+ }
+
+ DEBUG ((DEBUG_INFO,
+ " ***************************************************************************\n\n"
+ ));
+
+ return;
+}
+
+/**
+ Dump DMAR DRHD table.
+
+ @param[in] Drhd DMAR DRHD table
+**/
+VOID
+DumpDmarDrhd (
+ IN EFI_ACPI_DMAR_DRHD_HEADER *Drhd
+ )
+{
+ EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *DmarDeviceScopeEntry;
+ INTN DrhdLen;
+
+ if (Drhd == NULL) {
+ return;
+ }
+
+ DEBUG ((DEBUG_INFO,
+ " ***************************************************************************\n"
+ ));
+ DEBUG ((DEBUG_INFO,
+ " * DMA-Remapping Hardware Definition Structure *\n"
+ ));
+ DEBUG ((DEBUG_INFO,
+ " ***************************************************************************\n"
+ ));
+ DEBUG ((DEBUG_INFO,
+ (sizeof (UINTN) == sizeof (UINT64)) ?
+ " DRHD address ........................................... 0x%016lx\n" :
+ " DRHD address ........................................... 0x%08x\n",
+ Drhd
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Type ................................................. 0x%04x\n",
+ Drhd->Header.Type
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Length ............................................... 0x%04x\n",
+ Drhd->Header.Length
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Flags ................................................ 0x%02x\n",
+ Drhd->Flags
+ ));
+ DEBUG ((DEBUG_INFO,
+ " INCLUDE_PCI_ALL .................................... 0x%02x\n",
+ Drhd->Flags & EFI_ACPI_DMAR_DRHD_FLAGS_INCLUDE_PCI_ALL
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Segment Number ....................................... 0x%04x\n",
+ Drhd->SegmentNumber
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Register Base Address ................................ 0x%016lx\n",
+ Drhd->RegisterBaseAddress
+ ));
+
+ DrhdLen = Drhd->Header.Length - sizeof (EFI_ACPI_DMAR_DRHD_HEADER);
+ DmarDeviceScopeEntry = (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *) (Drhd + 1);
+ while (DrhdLen > 0) {
+ DumpDmarDeviceScopeEntry (DmarDeviceScopeEntry);
+ DrhdLen -= DmarDeviceScopeEntry->Length;
+ DmarDeviceScopeEntry = (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *) ((UINTN) DmarDeviceScopeEntry + DmarDeviceScopeEntry->Length);
+ }
+
+ DEBUG ((DEBUG_INFO,
+ " ***************************************************************************\n\n"
+ ));
+
+ return;
+}
+
+/**
+ Dump DMAR ACPI table.
+
+ @param[in] Dmar DMAR ACPI table
+**/
+VOID
+DumpAcpiDMAR (
+ IN EFI_ACPI_DMAR_HEADER *Dmar
+ )
+{
+ EFI_ACPI_DMAR_STRUCTURE_HEADER *DmarHeader;
+ INTN DmarLen;
+
+ if (Dmar == NULL) {
+ return;
+ }
+
+ //
+ // Dump Dmar table
+ //
+ DEBUG ((DEBUG_INFO,
+ "*****************************************************************************\n"
+ ));
+ DEBUG ((DEBUG_INFO,
+ "* DMAR Table *\n"
+ ));
+ DEBUG ((DEBUG_INFO,
+ "*****************************************************************************\n"
+ ));
+
+ DEBUG ((DEBUG_INFO,
+ (sizeof (UINTN) == sizeof (UINT64)) ?
+ "DMAR address ............................................. 0x%016lx\n" :
+ "DMAR address ............................................. 0x%08x\n",
+ Dmar
+ ));
+
+ DEBUG ((DEBUG_INFO,
+ " Table Contents:\n"
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Host Address Width ................................... 0x%02x\n",
+ Dmar->HostAddressWidth
+ ));
+ DEBUG ((DEBUG_INFO,
+ " Flags ................................................ 0x%02x\n",
+ Dmar->Flags
+ ));
+ DEBUG ((DEBUG_INFO,
+ " INTR_REMAP ......................................... 0x%02x\n",
+ Dmar->Flags & EFI_ACPI_DMAR_FLAGS_INTR_REMAP
+ ));
+ DEBUG ((DEBUG_INFO,
+ " X2APIC_OPT_OUT_SET ................................. 0x%02x\n",
+ Dmar->Flags & EFI_ACPI_DMAR_FLAGS_X2APIC_OPT_OUT
+ ));
+ DEBUG ((DEBUG_INFO,
+ " DMA_CTRL_PLATFORM_OPT_IN_FLAG ...................... 0x%02x\n",
+ Dmar->Flags & EFI_ACPI_DMAR_FLAGS_DMA_CTRL_PLATFORM_OPT_IN_FLAG
+ ));
+
+ DmarLen = Dmar->Header.Length - sizeof (EFI_ACPI_DMAR_HEADER);
+ DmarHeader = (EFI_ACPI_DMAR_STRUCTURE_HEADER *) (Dmar + 1);
+ while (DmarLen > 0) {
+ switch (DmarHeader->Type) {
+ case EFI_ACPI_DMAR_TYPE_DRHD:
+ DumpDmarDrhd ((EFI_ACPI_DMAR_DRHD_HEADER *) DmarHeader);
+ break;
+ case EFI_ACPI_DMAR_TYPE_RMRR:
+ DumpDmarRmrr ((EFI_ACPI_DMAR_RMRR_HEADER *) DmarHeader);
+ break;
+ default:
+ break;
+ }
+ DmarLen -= DmarHeader->Length;
+ DmarHeader = (EFI_ACPI_DMAR_STRUCTURE_HEADER *) ((UINTN) DmarHeader + DmarHeader->Length);
+ }
+
+ DEBUG ((DEBUG_INFO,
+ "*****************************************************************************\n\n"
+ ));
+
+ return;
+}
+
+/**
+ Get VTd engine number.
+
+ @param[in] AcpiDmarTable DMAR ACPI table
+
+ @return the VTd engine number.
+**/
+UINTN
+GetVtdEngineNumber (
+ IN EFI_ACPI_DMAR_HEADER *AcpiDmarTable
+ )
+{
+ EFI_ACPI_DMAR_STRUCTURE_HEADER *DmarHeader;
+ UINTN VtdIndex;
+
+ VtdIndex = 0;
+ DmarHeader = (EFI_ACPI_DMAR_STRUCTURE_HEADER *) ((UINTN) (AcpiDmarTable + 1));
+ while ((UINTN) DmarHeader < (UINTN) AcpiDmarTable + AcpiDmarTable->Header.Length) {
+ switch (DmarHeader->Type) {
+ case EFI_ACPI_DMAR_TYPE_DRHD:
+ VtdIndex++;
+ break;
+ default:
+ break;
+ }
+ DmarHeader = (EFI_ACPI_DMAR_STRUCTURE_HEADER *) ((UINTN) DmarHeader + DmarHeader->Length);
+ }
+ return VtdIndex ;
+}
+
+/**
+ Get PCI device information from DMAR DevScopeEntry.
+
+ @param[in] Segment The segment number.
+ @param[in] DmarDevScopeEntry DMAR DevScopeEntry
+ @param[out] Bus The bus number.
+ @param[out] Device The device number.
+ @param[out] Function The function number.
+
+ @retval EFI_SUCCESS The PCI device information is returned.
+**/
+EFI_STATUS
+GetPciBusDeviceFunction (
+ IN UINT16 Segment,
+ IN EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *DmarDevScopeEntry,
+ OUT UINT8 *Bus,
+ OUT UINT8 *Device,
+ OUT UINT8 *Function
+ )
+{
+ EFI_ACPI_DMAR_PCI_PATH *DmarPciPath;
+ UINT8 MyBus;
+ UINT8 MyDevice;
+ UINT8 MyFunction;
+
+ DmarPciPath = (EFI_ACPI_DMAR_PCI_PATH *) ((UINTN) (DmarDevScopeEntry + 1));
+ MyBus = DmarDevScopeEntry->StartBusNumber;
+ MyDevice = DmarPciPath->Device;
+ MyFunction = DmarPciPath->Function;
+
+ switch (DmarDevScopeEntry->Type) {
+ case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_ENDPOINT:
+ case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_BRIDGE:
+ while ((UINTN) DmarPciPath + sizeof (EFI_ACPI_DMAR_PCI_PATH) < (UINTN) DmarDevScopeEntry + DmarDevScopeEntry->Length) {
+ MyBus = PciSegmentRead8 (PCI_SEGMENT_LIB_ADDRESS (Segment, MyBus, MyDevice, MyFunction, PCI_BRIDGE_SECONDARY_BUS_REGISTER_OFFSET));
+ DmarPciPath ++;
+ MyDevice = DmarPciPath->Device;
+ MyFunction = DmarPciPath->Function;
+ }
+ break;
+ case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_IOAPIC:
+ case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_MSI_CAPABLE_HPET:
+ case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_ACPI_NAMESPACE_DEVICE:
+ break;
+ }
+
+ *Bus = MyBus;
+ *Device = MyDevice;
+ *Function = MyFunction;
+
+ return EFI_SUCCESS;
+}
+
+/**
+ Return the index of PCI data.
+
+ @param[in] VTdUnitInfo The VTd engine unit information.
+ @param[in] Segment The Segment used to identify a VTd engine.
+ @param[in] SourceId The SourceId used to identify a VTd engine and table entry.
+
+ @return The index of the PCI data.
+ @retval (UINTN)-1 The PCI data is not found.
+**/
+UINTN
+GetPciDataIndex (
+ IN VTD_UNIT_INFO *VTdUnitInfo,
+ IN UINT16 Segment,
+ IN VTD_SOURCE_ID SourceId
+ )
+{
+ UINTN Index;
+ VTD_SOURCE_ID *PciSourceId;
+ PEI_PCI_DEVICE_DATA *PciDeviceDataBase;
+
+ if (Segment != VTdUnitInfo->Segment) {
+ return (UINTN)-1;
+ }
+
+ for (Index = 0; Index < VTdUnitInfo->PciDeviceInfo.PciDeviceDataNumber; Index++) {
+ PciDeviceDataBase = (PEI_PCI_DEVICE_DATA*) (UINTN) VTdUnitInfo->PciDeviceInfo.PciDeviceData;
+ PciSourceId = &PciDeviceDataBase[Index].PciSourceId;
+ if ((PciSourceId->Bits.Bus == SourceId.Bits.Bus) &&
+ (PciSourceId->Bits.Device == SourceId.Bits.Device) &&
+ (PciSourceId->Bits.Function == SourceId.Bits.Function) ) {
+ return Index;
+ }
+ }
+
+ return (UINTN)-1;
+}
+
+
+/**
+ Register PCI device to VTd engine.
+
+ @param[in] VTdUnitInfo The VTd engine unit information.
+ @param[in] Segment The segment of the source.
+ @param[in] SourceId The SourceId of the source.
+ @param[in] DeviceType The DMAR device scope type.
+ @param[in] CheckExist TRUE: ERROR will be returned if the PCI device is already registered.
+ FALSE: SUCCESS will be returned if the PCI device is registered.
+
+ @retval EFI_SUCCESS The PCI device is registered.
+ @retval EFI_OUT_OF_RESOURCES No enough resource to register a new PCI device.
+ @retval EFI_ALREADY_STARTED The device is already registered.
+
+**/
+EFI_STATUS
+RegisterPciDevice (
+ IN VTD_UNIT_INFO *VTdUnitInfo,
+ IN UINT16 Segment,
+ IN VTD_SOURCE_ID SourceId,
+ IN UINT8 DeviceType,
+ IN BOOLEAN CheckExist
+ )
+{
+ PEI_PCI_DEVICE_INFORMATION *PciDeviceInfo;
+ VTD_SOURCE_ID *PciSourceId;
+ UINTN PciDataIndex;
+ UINTN PciDeviceDataSize;
+ PEI_PCI_DEVICE_DATA *NewPciDeviceData;
+ PEI_PCI_DEVICE_DATA *PciDeviceDataBase;
+
+ PciDeviceInfo = &VTdUnitInfo->PciDeviceInfo;
+
+ PciDataIndex = GetPciDataIndex (VTdUnitInfo, Segment, SourceId);
+ if (PciDataIndex == (UINTN)-1) {
+ //
+ // Register new
+ //
+
+ if (PciDeviceInfo->PciDeviceDataNumber >= PciDeviceInfo->PciDeviceDataMaxNumber) {
+ //
+ // Reallocate
+ //
+ PciDeviceDataSize = sizeof(*NewPciDeviceData) * (PciDeviceInfo->PciDeviceDataMaxNumber + MAX_VTD_PCI_DATA_NUMBER);
+ DEBUG ((DEBUG_INFO, "New PciDeviceDataSize:%d Page:%d\n", PciDeviceDataSize, EFI_SIZE_TO_PAGES (PciDeviceDataSize)));
+ NewPciDeviceData = AllocateZeroPages (EFI_SIZE_TO_PAGES(PciDeviceDataSize));
+ if (NewPciDeviceData == NULL) {
+ return EFI_OUT_OF_RESOURCES;
+ }
+ PciDeviceInfo->PciDeviceDataMaxNumber += MAX_VTD_PCI_DATA_NUMBER;
+ if (PciDeviceInfo->PciDeviceData != 0) {
+ CopyMem (NewPciDeviceData, (VOID *) (UINTN) PciDeviceInfo->PciDeviceData, sizeof (*NewPciDeviceData) * PciDeviceInfo->PciDeviceDataNumber);
+ FreePages((VOID *) (UINTN) PciDeviceInfo->PciDeviceData, PciDeviceInfo->PciDeviceDataPageSize);
+ }
+ PciDeviceInfo->PciDeviceData = (UINT32) (UINTN) NewPciDeviceData;
+ PciDeviceInfo->PciDeviceDataPageSize = (UINT32) EFI_SIZE_TO_PAGES (PciDeviceDataSize);
+ }
+
+ ASSERT (PciDeviceInfo->PciDeviceDataNumber < PciDeviceInfo->PciDeviceDataMaxNumber);
+
+ PciDeviceDataBase = (PEI_PCI_DEVICE_DATA *) (UINTN) PciDeviceInfo->PciDeviceData;
+ PciSourceId = &PciDeviceDataBase[PciDeviceInfo->PciDeviceDataNumber].PciSourceId;
+ PciSourceId->Bits.Bus = SourceId.Bits.Bus;
+ PciSourceId->Bits.Device = SourceId.Bits.Device;
+ PciSourceId->Bits.Function = SourceId.Bits.Function;
+
+ DEBUG ((DEBUG_INFO, " RegisterPciDevice: PCI S%04x B%02x D%02x F%02x", Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+
+ PciDeviceDataBase[PciDeviceInfo->PciDeviceDataNumber].DeviceType = DeviceType;
+
+ if ((DeviceType != EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_ENDPOINT) &&
+ (DeviceType != EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_BRIDGE)) {
+ DEBUG ((DEBUG_INFO, " (*)"));
+ }
+ DEBUG ((DEBUG_INFO, "\n"));
+
+ PciDeviceInfo->PciDeviceDataNumber++;
+ } else {
+ if (CheckExist) {
+ DEBUG ((DEBUG_INFO, " RegisterPciDevice: PCI S%04x B%02x D%02x F%02x already registered\n", Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+ return EFI_ALREADY_STARTED;
+ }
+ }
+
+ return EFI_SUCCESS;
+}
+
+/**
+ Process DMAR DRHD table.
+
+ @param[in] VTdUnitInfo The VTd engine unit information.
+ @param[in] DmarDrhd The DRHD table.
+
+**/
+VOID
+ProcessDhrd (
+ IN VTD_UNIT_INFO *VTdUnitInfo,
+ IN EFI_ACPI_DMAR_DRHD_HEADER *DmarDrhd
+ )
+{
+ EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *DmarDevScopeEntry;
+ UINT8 Bus;
+ UINT8 Device;
+ UINT8 Function;
+ EFI_STATUS Status;
+ VTD_SOURCE_ID SourceId;
+
+ DEBUG ((DEBUG_INFO," VTD BaseAddress - 0x%016lx\n", DmarDrhd->RegisterBaseAddress));
+ VTdUnitInfo->VtdUnitBaseAddress = (UINT32) DmarDrhd->RegisterBaseAddress;
+
+ DEBUG ((DEBUG_INFO," VTD Segment - %d\n", DmarDrhd->SegmentNumber));
+ VTdUnitInfo->Segment = DmarDrhd->SegmentNumber;
+
+ VTdUnitInfo->FixedSecondLevelPagingEntry = 0;
+ VTdUnitInfo->RmrrSecondLevelPagingEntry = 0;
+ VTdUnitInfo->RootEntryTable = 0;
+ VTdUnitInfo->ExtRootEntryTable = 0;
+ VTdUnitInfo->RootEntryTablePageSize = 0;
+ VTdUnitInfo->ExtRootEntryTablePageSize = 0;
+
+ VTdUnitInfo->PciDeviceInfo.IncludeAllFlag = 0;
+ VTdUnitInfo->PciDeviceInfo.PciDeviceDataMaxNumber = 0;
+ VTdUnitInfo->PciDeviceInfo.PciDeviceDataNumber = 0;
+ VTdUnitInfo->PciDeviceInfo.PciDeviceDataPageSize = 0;
+ VTdUnitInfo->PciDeviceInfo.PciDeviceData = 0;
+
+ if ((DmarDrhd->Flags & EFI_ACPI_DMAR_DRHD_FLAGS_INCLUDE_PCI_ALL) != 0) {
+ VTdUnitInfo->PciDeviceInfo.IncludeAllFlag = TRUE;
+ DEBUG ((DEBUG_INFO," ProcessDhrd: with INCLUDE ALL\n"));
+ } else {
+ VTdUnitInfo->PciDeviceInfo.IncludeAllFlag = FALSE;
+ DEBUG ((DEBUG_INFO," ProcessDhrd: without INCLUDE ALL\n"));
+ }
+
+ VTdUnitInfo->PciDeviceInfo.PciDeviceDataNumber = 0;
+ VTdUnitInfo->PciDeviceInfo.PciDeviceDataMaxNumber = 0;
+ VTdUnitInfo->PciDeviceInfo.PciDeviceDataPageSize = 0;
+ VTdUnitInfo->PciDeviceInfo.PciDeviceData = 0;
+
+ DmarDevScopeEntry = (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *) ((UINTN) (DmarDrhd + 1));
+ while ((UINTN)DmarDevScopeEntry < (UINTN) DmarDrhd + DmarDrhd->Header.Length) {
+
+ Status = GetPciBusDeviceFunction (DmarDrhd->SegmentNumber, DmarDevScopeEntry, &Bus, &Device, &Function);
+ if (EFI_ERROR (Status)) {
+ return;
+ }
+
+ DEBUG ((DEBUG_INFO," ProcessDhrd: "));
+ switch (DmarDevScopeEntry->Type) {
+ case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_ENDPOINT:
+ DEBUG ((DEBUG_INFO,"PCI Endpoint"));
+ break;
+ case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_BRIDGE:
+ DEBUG ((DEBUG_INFO,"PCI-PCI bridge"));
+ break;
+ case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_IOAPIC:
+ DEBUG ((DEBUG_INFO,"IOAPIC"));
+ break;
+ case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_MSI_CAPABLE_HPET:
+ DEBUG ((DEBUG_INFO,"MSI Capable HPET"));
+ break;
+ case EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_ACPI_NAMESPACE_DEVICE:
+ DEBUG ((DEBUG_INFO,"ACPI Namespace Device"));
+ break;
+ }
+ DEBUG ((DEBUG_INFO," S%04x B%02x D%02x F%02x\n", DmarDrhd->SegmentNumber, Bus, Device, Function));
+
+ SourceId.Bits.Bus = Bus;
+ SourceId.Bits.Device = Device;
+ SourceId.Bits.Function = Function;
+
+ Status = RegisterPciDevice (VTdUnitInfo, DmarDrhd->SegmentNumber, SourceId, DmarDevScopeEntry->Type, TRUE);
+ if (EFI_ERROR (Status)) {
+ DEBUG ((DEBUG_ERROR,"RegisterPciDevice Failed !\n"));
+ }
+
+ DmarDevScopeEntry = (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *) ((UINTN) DmarDevScopeEntry + DmarDevScopeEntry->Length);
+ }
+}
+
+/**
+ Dump the PCI device information managed by this VTd engine.
+
+ @param[in] VTdInfo The VTd engine context information.
+ @param[in] VtdIndex The index of VTd engine.
+
+**/
+VOID
+DumpPciDeviceInfo (
+ IN VTD_INFO *VTdInfo,
+ IN UINTN VtdIndex
+ )
+{
+ UINTN Index;
+ PEI_PCI_DEVICE_DATA *PciDeviceDataBase;
+
+ DEBUG ((DEBUG_INFO,"PCI Device Information (Number 0x%x, IncludeAll - %d):\n",
+ VTdInfo->VtdUnitInfo[VtdIndex].PciDeviceInfo.PciDeviceDataNumber,
+ VTdInfo->VtdUnitInfo[VtdIndex].PciDeviceInfo.IncludeAllFlag
+ ));
+
+ PciDeviceDataBase = (PEI_PCI_DEVICE_DATA *) (UINTN) VTdInfo->VtdUnitInfo[VtdIndex].PciDeviceInfo.PciDeviceData;
+
+ for (Index = 0; Index < VTdInfo->VtdUnitInfo[VtdIndex].PciDeviceInfo.PciDeviceDataNumber; Index++) {
+ DEBUG ((DEBUG_INFO," S%04x B%02x D%02x F%02x\n",
+ VTdInfo->VtdUnitInfo[VtdIndex].Segment,
+ PciDeviceDataBase[Index].PciSourceId.Bits.Bus,
+ PciDeviceDataBase[Index].PciSourceId.Bits.Device,
+ PciDeviceDataBase[Index].PciSourceId.Bits.Function
+ ));
+ }
+}
+
+/**
+ Parse DMAR DRHD table.
+
+ @param[in] AcpiDmarTable DMAR ACPI table
+
+ @return EFI_SUCCESS The DMAR DRHD table is parsed.
+
+**/
+EFI_STATUS
+ParseDmarAcpiTableDrhd (
+ IN EFI_ACPI_DMAR_HEADER *AcpiDmarTable
+ )
+{
+ EFI_ACPI_DMAR_STRUCTURE_HEADER *DmarHeader;
+ UINTN VtdUnitNumber;
+ UINTN VtdIndex;
+ VTD_INFO *VTdInfo;
+
+ VtdUnitNumber = GetVtdEngineNumber (AcpiDmarTable);
+ if (VtdUnitNumber == 0) {
+ return EFI_UNSUPPORTED;
+ }
+
+ VTdInfo = BuildGuidHob (&mVTdInfoGuid, sizeof (VTD_INFO) + (VtdUnitNumber - 1) * sizeof (VTD_UNIT_INFO));
+ ASSERT(VTdInfo != NULL);
+ if (VTdInfo == NULL) {
+ return EFI_OUT_OF_RESOURCES;
+ }
+
+ //
+ // Initialize the engine mask to all.
+ //
+ VTdInfo->AcpiDmarTable = (UINT32) (UINTN) AcpiDmarTable;
+ VTdInfo->HostAddressWidth = AcpiDmarTable->HostAddressWidth;
+ VTdInfo->VTdEngineCount = (UINT32) VtdUnitNumber;
+
+ VtdIndex = 0;
+ DmarHeader = (EFI_ACPI_DMAR_STRUCTURE_HEADER *) ((UINTN) (AcpiDmarTable + 1));
+ while ((UINTN) DmarHeader < (UINTN) AcpiDmarTable + AcpiDmarTable->Header.Length) {
+ switch (DmarHeader->Type) {
+ case EFI_ACPI_DMAR_TYPE_DRHD:
+ ASSERT (VtdIndex < VtdUnitNumber);
+ ProcessDhrd (&VTdInfo->VtdUnitInfo[VtdIndex], (EFI_ACPI_DMAR_DRHD_HEADER *) DmarHeader);
+ VtdIndex++;
+
+ break;
+
+ default:
+ break;
+ }
+ DmarHeader = (EFI_ACPI_DMAR_STRUCTURE_HEADER *) ((UINTN) DmarHeader + DmarHeader->Length);
+ }
+ ASSERT (VtdIndex == VtdUnitNumber);
+
+ for (VtdIndex = 0; VtdIndex < VtdUnitNumber; VtdIndex++) {
+ DumpPciDeviceInfo (VTdInfo, VtdIndex);
+ }
+
+ return EFI_SUCCESS;
+}
+
+
+/**
+ Process DMAR RMRR table.
+
+ @param[in] VTdInfo The VTd engine context information.
+ @param[in] DmarRmrr The RMRR table.
+
+**/
+VOID
+ProcessRmrr (
+ IN VTD_INFO *VTdInfo,
+ IN EFI_ACPI_DMAR_RMRR_HEADER *DmarRmrr
+ )
+{
+ EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *DmarDevScopeEntry;
+ UINT8 Bus;
+ UINT8 Device;
+ UINT8 Function;
+ EFI_STATUS Status;
+ VTD_SOURCE_ID SourceId;
+
+ DEBUG ((DEBUG_INFO," PEI RMRR (Base 0x%016lx, Limit 0x%016lx)\n", DmarRmrr->ReservedMemoryRegionBaseAddress, DmarRmrr->ReservedMemoryRegionLimitAddress));
+
+ if ((DmarRmrr->ReservedMemoryRegionBaseAddress == 0) ||
+ (DmarRmrr->ReservedMemoryRegionLimitAddress == 0)) {
+ return ;
+ }
+
+ DmarDevScopeEntry = (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *) ((UINTN) (DmarRmrr + 1));
+ while ((UINTN) DmarDevScopeEntry < (UINTN) DmarRmrr + DmarRmrr->Header.Length) {
+ if (DmarDevScopeEntry->Type != EFI_ACPI_DEVICE_SCOPE_ENTRY_TYPE_PCI_ENDPOINT) {
+ DEBUG ((DEBUG_INFO,"RMRR DevScopeEntryType is not endpoint, type[0x%x] \n", DmarDevScopeEntry->Type));
+ return;
+ }
+
+ Status = GetPciBusDeviceFunction (DmarRmrr->SegmentNumber, DmarDevScopeEntry, &Bus, &Device, &Function);
+ if (EFI_ERROR (Status)) {
+ continue;
+ }
+
+ DEBUG ((DEBUG_INFO,"RMRR S%04x B%02x D%02x F%02x\n", DmarRmrr->SegmentNumber, Bus, Device, Function));
+
+ SourceId.Bits.Bus = Bus;
+ SourceId.Bits.Device = Device;
+ SourceId.Bits.Function = Function;
+
+ Status = EnableRmrrPageAttribute (
+ VTdInfo,
+ DmarRmrr->SegmentNumber,
+ SourceId,
+ DmarRmrr->ReservedMemoryRegionBaseAddress,
+ DmarRmrr->ReservedMemoryRegionLimitAddress,
+ EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE
+ );
+ if (EFI_ERROR (Status)) {
+ DEBUG ((DEBUG_INFO, "EnableRmrrPageAttribute : %r\n", Status));
+ }
+
+ DmarDevScopeEntry = (EFI_ACPI_DMAR_DEVICE_SCOPE_STRUCTURE_HEADER *) ((UINTN) DmarDevScopeEntry + DmarDevScopeEntry->Length);
+ }
+}
+
+/**
+ Parse DMAR DRHD table.
+
+ @param[in] VTdInfo The VTd engine context information.
+
+**/
+VOID
+ParseDmarAcpiTableRmrr (
+ IN VTD_INFO *VTdInfo
+ )
+{
+ EFI_ACPI_DMAR_HEADER *AcpiDmarTable;
+ EFI_ACPI_DMAR_STRUCTURE_HEADER *DmarHeader;
+
+ AcpiDmarTable = (EFI_ACPI_DMAR_HEADER *) (UINTN) VTdInfo->AcpiDmarTable;
+
+ DmarHeader = (EFI_ACPI_DMAR_STRUCTURE_HEADER *) ((UINTN) (AcpiDmarTable + 1));
+ while ((UINTN) DmarHeader < (UINTN) AcpiDmarTable + AcpiDmarTable->Header.Length) {
+ switch (DmarHeader->Type) {
+ case EFI_ACPI_DMAR_TYPE_RMRR:
+ ProcessRmrr (VTdInfo, (EFI_ACPI_DMAR_RMRR_HEADER *) DmarHeader);
+ break;
+ default:
+ break;
+ }
+ DmarHeader = (EFI_ACPI_DMAR_STRUCTURE_HEADER *) ((UINTN) DmarHeader + DmarHeader->Length);
+ }
+}
+
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmar.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmar.c
new file mode 100644
index 00000000..9ad2a494
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmar.c
@@ -0,0 +1,466 @@
+/** @file
+
+ Copyright (c) 2020, Intel Corporation. All rights reserved.<BR>
+
+ SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include <PiPei.h>
+#include <Library/BaseLib.h>
+#include <Library/BaseMemoryLib.h>
+#include <Library/IoLib.h>
+#include <Library/DebugLib.h>
+#include <Library/MemoryAllocationLib.h>
+#include <Library/CacheMaintenanceLib.h>
+#include <Library/PeiServicesLib.h>
+#include <IndustryStandard/Vtd.h>
+#include <Ppi/VtdInfo.h>
+#include <Ppi/VtdNullRootEntryTable.h>
+#include <Ppi/IoMmu.h>
+#include "IntelVTdDmarPei.h"
+
+/**
+ Flush VTD page table and context table memory.
+
+ This action is to make sure the IOMMU engine can get final data in memory.
+
+ @param[in] VTdUnitInfo The VTd engine unit information.
+ @param[in] Base The base address of memory to be flushed.
+ @param[in] Size The size of memory in bytes to be flushed.
+**/
+VOID
+FlushPageTableMemory (
+ IN VTD_UNIT_INFO *VTdUnitInfo,
+ IN UINTN Base,
+ IN UINTN Size
+ )
+{
+ if (VTdUnitInfo->ECapReg.Bits.C == 0) {
+ WriteBackDataCacheRange ((VOID *) Base, Size);
+ }
+}
+
+/**
+ Flush VTd engine write buffer.
+
+ @param[in] VtdUnitBaseAddress The base address of the VTd engine.
+**/
+VOID
+FlushWriteBuffer (
+ IN UINTN VtdUnitBaseAddress
+ )
+{
+ UINT32 Reg32;
+ VTD_CAP_REG CapReg;
+
+ CapReg.Uint64 = MmioRead64 (VtdUnitBaseAddress + R_CAP_REG);
+
+ if (CapReg.Bits.RWBF != 0) {
+ Reg32 = MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG);
+ MmioWrite32 (VtdUnitBaseAddress + R_GCMD_REG, Reg32 | B_GMCD_REG_WBF);
+ do {
+ Reg32 = MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG);
+ } while ((Reg32 & B_GSTS_REG_WBF) != 0);
+ }
+}
+
+/**
+ Invalidate VTd context cache.
+
+ @param[in] VtdUnitBaseAddress The base address of the VTd engine.
+**/
+EFI_STATUS
+InvalidateContextCache (
+ IN UINTN VtdUnitBaseAddress
+ )
+{
+ UINT64 Reg64;
+
+ Reg64 = MmioRead64 (VtdUnitBaseAddress + R_CCMD_REG);
+ if ((Reg64 & B_CCMD_REG_ICC) != 0) {
+ DEBUG ((DEBUG_ERROR,"ERROR: InvalidateContextCache: B_CCMD_REG_ICC is set for VTD(%x)\n",VtdUnitBaseAddress));
+ return EFI_DEVICE_ERROR;
+ }
+
+ Reg64 &= ((~B_CCMD_REG_ICC) & (~B_CCMD_REG_CIRG_MASK));
+ Reg64 |= (B_CCMD_REG_ICC | V_CCMD_REG_CIRG_GLOBAL);
+ MmioWrite64 (VtdUnitBaseAddress + R_CCMD_REG, Reg64);
+
+ do {
+ Reg64 = MmioRead64 (VtdUnitBaseAddress + R_CCMD_REG);
+ } while ((Reg64 & B_CCMD_REG_ICC) != 0);
+
+ return EFI_SUCCESS;
+}
+
+/**
+ Invalidate VTd IOTLB.
+
+ @param[in] VtdUnitBaseAddress The base address of the VTd engine.
+**/
+EFI_STATUS
+InvalidateIOTLB (
+ IN UINTN VtdUnitBaseAddress
+ )
+{
+ UINT64 Reg64;
+ VTD_ECAP_REG ECapReg;
+
+ ECapReg.Uint64 = MmioRead64 (VtdUnitBaseAddress + R_ECAP_REG);
+
+ Reg64 = MmioRead64 (VtdUnitBaseAddress + (ECapReg.Bits.IRO * 16) + R_IOTLB_REG);
+ if ((Reg64 & B_IOTLB_REG_IVT) != 0) {
+ DEBUG ((DEBUG_ERROR, "ERROR: InvalidateIOTLB: B_IOTLB_REG_IVT is set for VTD(%x)\n", VtdUnitBaseAddress));
+ return EFI_DEVICE_ERROR;
+ }
+
+ Reg64 &= ((~B_IOTLB_REG_IVT) & (~B_IOTLB_REG_IIRG_MASK));
+ Reg64 |= (B_IOTLB_REG_IVT | V_IOTLB_REG_IIRG_GLOBAL);
+ MmioWrite64 (VtdUnitBaseAddress + (ECapReg.Bits.IRO * 16) + R_IOTLB_REG, Reg64);
+
+ do {
+ Reg64 = MmioRead64 (VtdUnitBaseAddress + (ECapReg.Bits.IRO * 16) + R_IOTLB_REG);
+ } while ((Reg64 & B_IOTLB_REG_IVT) != 0);
+
+ return EFI_SUCCESS;
+}
+
+/**
+ Enable DMAR translation.
+
+ @param[in] VtdUnitBaseAddress The base address of the VTd engine.
+ @param[in] RootEntryTable The address of the VTd RootEntryTable.
+
+ @retval EFI_SUCCESS DMAR translation is enabled.
+ @retval EFI_DEVICE_ERROR DMAR translation is not enabled.
+**/
+EFI_STATUS
+EnableDmar (
+ IN UINTN VtdUnitBaseAddress,
+ IN UINTN RootEntryTable
+ )
+{
+ UINT32 Reg32;
+
+ DEBUG ((DEBUG_INFO, ">>>>>>EnableDmar() for engine [%x] \n", VtdUnitBaseAddress));
+
+ DEBUG ((DEBUG_INFO, "RootEntryTable 0x%x \n", RootEntryTable));
+ MmioWrite64 (VtdUnitBaseAddress + R_RTADDR_REG, (UINT64) (UINTN) RootEntryTable);
+
+ MmioWrite32 (VtdUnitBaseAddress + R_GCMD_REG, B_GMCD_REG_SRTP);
+
+ DEBUG ((DEBUG_INFO, "EnableDmar: waiting for RTPS bit to be set... \n"));
+ do {
+ Reg32 = MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG);
+ } while((Reg32 & B_GSTS_REG_RTPS) == 0);
+
+ //
+ // Init DMAR Fault Event and Data registers
+ //
+ Reg32 = MmioRead32 (VtdUnitBaseAddress + R_FEDATA_REG);
+
+ //
+ // Write Buffer Flush before invalidation
+ //
+ FlushWriteBuffer (VtdUnitBaseAddress);
+
+ //
+ // Invalidate the context cache
+ //
+ InvalidateContextCache (VtdUnitBaseAddress);
+
+ //
+ // Invalidate the IOTLB cache
+ //
+ InvalidateIOTLB (VtdUnitBaseAddress);
+
+ //
+ // Enable VTd
+ //
+ MmioWrite32 (VtdUnitBaseAddress + R_GCMD_REG, B_GMCD_REG_TE);
+ DEBUG ((DEBUG_INFO, "EnableDmar: Waiting B_GSTS_REG_TE ...\n"));
+ do {
+ Reg32 = MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG);
+ } while ((Reg32 & B_GSTS_REG_TE) == 0);
+
+ DEBUG ((DEBUG_INFO, "VTD () enabled!<<<<<<\n"));
+
+ return EFI_SUCCESS;
+}
+
+/**
+ Disable DMAR translation.
+
+ @param[in] VtdUnitBaseAddress The base address of the VTd engine.
+
+ @retval EFI_SUCCESS DMAR translation is disabled.
+ @retval EFI_DEVICE_ERROR DMAR translation is not disabled.
+**/
+EFI_STATUS
+DisableDmar (
+ IN UINTN VtdUnitBaseAddress
+ )
+{
+ UINT32 Reg32;
+ UINT32 Status;
+ UINT32 Command;
+
+ DEBUG ((DEBUG_INFO, ">>>>>>DisableDmar() for engine [%x] \n", VtdUnitBaseAddress));
+
+ //
+ // Write Buffer Flush before invalidation
+ //
+ FlushWriteBuffer (VtdUnitBaseAddress);
+
+ //
+ // Disable Dmar
+ //
+ //
+ // Set TE (Translation Enable: BIT31) of Global command register to zero
+ //
+ Reg32 = MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG);
+ Status = (Reg32 & 0x96FFFFFF); // Reset the one-shot bits
+ Command = (Status & ~B_GMCD_REG_TE);
+ MmioWrite32 (VtdUnitBaseAddress + R_GCMD_REG, Command);
+
+ //
+ // Poll on TE Status bit of Global status register to become zero
+ //
+ do {
+ Reg32 = MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG);
+ } while ((Reg32 & B_GSTS_REG_TE) == B_GSTS_REG_TE);
+
+ //
+ // Set SRTP (Set Root Table Pointer: BIT30) of Global command register in order to update the root table pointer
+ //
+ Reg32 = MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG);
+ Status = (Reg32 & 0x96FFFFFF); // Reset the one-shot bits
+ Command = (Status | B_GMCD_REG_SRTP);
+ MmioWrite32 (VtdUnitBaseAddress + R_GCMD_REG, Command);
+ do {
+ Reg32 = MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG);
+ } while((Reg32 & B_GSTS_REG_RTPS) == 0);
+
+ Reg32 = MmioRead32 (VtdUnitBaseAddress + R_GSTS_REG);
+ DEBUG((DEBUG_INFO, "DisableDmar: GSTS_REG - 0x%08x\n", Reg32));
+
+ MmioWrite64 (VtdUnitBaseAddress + R_RTADDR_REG, 0);
+
+ DEBUG ((DEBUG_INFO,"VTD () Disabled!<<<<<<\n"));
+
+ return EFI_SUCCESS;
+}
+
+/**
+ Enable VTd translation table protection for all.
+
+ @param[in] VTdInfo The VTd engine context information.
+ @param[in] EngineMask The mask of the VTd engine to be accessed.
+**/
+VOID
+EnableVTdTranslationProtectionAll (
+ IN VTD_INFO *VTdInfo,
+ IN UINT64 EngineMask
+ )
+{
+ EFI_STATUS Status;
+ EDKII_VTD_NULL_ROOT_ENTRY_TABLE_PPI *RootEntryTable;
+ UINTN Index;
+
+ DEBUG ((DEBUG_INFO, "EnableVTdTranslationProtectionAll - 0x%lx\n", EngineMask));
+
+ Status = PeiServicesLocatePpi (
+ &gEdkiiVTdNullRootEntryTableGuid,
+ 0,
+ NULL,
+ (VOID **)&RootEntryTable
+ );
+ if (EFI_ERROR(Status)) {
+ DEBUG ((DEBUG_ERROR, "Locate Null Root Entry Table Ppi Failed : %r\n", Status));
+ ASSERT (FALSE);
+ return;
+ }
+
+ for (Index = 0; Index < VTdInfo->VTdEngineCount; Index++) {
+ if ((EngineMask & LShiftU64(1, Index)) == 0) {
+ continue;
+ }
+ EnableDmar ((UINTN) VTdInfo->VtdUnitInfo[Index].VtdUnitBaseAddress, (UINTN) *RootEntryTable);
+ }
+
+ return;
+}
+
+/**
+ Enable VTd translation table protection.
+
+ @param[in] VTdInfo The VTd engine context information.
+
+ @retval EFI_SUCCESS DMAR translation is enabled.
+ @retval EFI_DEVICE_ERROR DMAR translation is not enabled.
+**/
+EFI_STATUS
+EnableVTdTranslationProtection (
+ IN VTD_INFO *VTdInfo
+ )
+{
+ EFI_STATUS Status;
+ UINTN VtdIndex;
+
+ for (VtdIndex = 0; VtdIndex < VTdInfo->VTdEngineCount; VtdIndex++) {
+ if (VTdInfo->VtdUnitInfo[VtdIndex].ExtRootEntryTable != 0) {
+ DEBUG ((DEBUG_INFO, "EnableVtdDmar (%d) ExtRootEntryTable 0x%x\n", VtdIndex, VTdInfo->VtdUnitInfo[VtdIndex].ExtRootEntryTable));
+ Status = EnableDmar (VTdInfo->VtdUnitInfo[VtdIndex].VtdUnitBaseAddress, VTdInfo->VtdUnitInfo[VtdIndex].ExtRootEntryTable);
+ } else {
+ DEBUG ((DEBUG_INFO, "EnableVtdDmar (%d) RootEntryTable 0x%x\n", VtdIndex, VTdInfo->VtdUnitInfo[VtdIndex].RootEntryTable));
+ Status = EnableDmar (VTdInfo->VtdUnitInfo[VtdIndex].VtdUnitBaseAddress, VTdInfo->VtdUnitInfo[VtdIndex].RootEntryTable);
+ }
+ if (EFI_ERROR (Status)) {
+ DEBUG ((DEBUG_ERROR, "EnableVtdDmar (%d) Failed !\n", VtdIndex));
+ return Status;
+ }
+ }
+ return EFI_SUCCESS;
+}
+
+/**
+ Disable VTd translation table protection.
+
+ @param[in] VTdInfo The VTd engine context information.
+ @param[in] EngineMask The mask of the VTd engine to be accessed.
+**/
+VOID
+DisableVTdTranslationProtection (
+ IN VTD_INFO *VTdInfo,
+ IN UINT64 EngineMask
+ )
+{
+ UINTN Index;
+
+ DEBUG ((DEBUG_INFO, "DisableVTdTranslationProtection - 0x%lx\n", EngineMask));
+
+ for (Index = 0; Index < VTdInfo->VTdEngineCount; Index++) {
+ if ((EngineMask & LShiftU64(1, Index)) == 0) {
+ continue;
+ }
+ DisableDmar ((UINTN) VTdInfo->VtdUnitInfo[Index].VtdUnitBaseAddress);
+ }
+
+ return;
+}
+
+/**
+ Dump VTd capability registers.
+
+ @param[in] CapReg The capability register.
+**/
+VOID
+DumpVtdCapRegs (
+ IN VTD_CAP_REG *CapReg
+ )
+{
+ DEBUG ((DEBUG_INFO, " CapReg:\n", CapReg->Uint64));
+ DEBUG ((DEBUG_INFO, " ND - 0x%x\n", CapReg->Bits.ND));
+ DEBUG ((DEBUG_INFO, " AFL - 0x%x\n", CapReg->Bits.AFL));
+ DEBUG ((DEBUG_INFO, " RWBF - 0x%x\n", CapReg->Bits.RWBF));
+ DEBUG ((DEBUG_INFO, " PLMR - 0x%x\n", CapReg->Bits.PLMR));
+ DEBUG ((DEBUG_INFO, " PHMR - 0x%x\n", CapReg->Bits.PHMR));
+ DEBUG ((DEBUG_INFO, " CM - 0x%x\n", CapReg->Bits.CM));
+ DEBUG ((DEBUG_INFO, " SAGAW - 0x%x\n", CapReg->Bits.SAGAW));
+ DEBUG ((DEBUG_INFO, " MGAW - 0x%x\n", CapReg->Bits.MGAW));
+ DEBUG ((DEBUG_INFO, " ZLR - 0x%x\n", CapReg->Bits.ZLR));
+ DEBUG ((DEBUG_INFO, " FRO - 0x%x\n", CapReg->Bits.FRO));
+ DEBUG ((DEBUG_INFO, " SLLPS - 0x%x\n", CapReg->Bits.SLLPS));
+ DEBUG ((DEBUG_INFO, " PSI - 0x%x\n", CapReg->Bits.PSI));
+ DEBUG ((DEBUG_INFO, " NFR - 0x%x\n", CapReg->Bits.NFR));
+ DEBUG ((DEBUG_INFO, " MAMV - 0x%x\n", CapReg->Bits.MAMV));
+ DEBUG ((DEBUG_INFO, " DWD - 0x%x\n", CapReg->Bits.DWD));
+ DEBUG ((DEBUG_INFO, " DRD - 0x%x\n", CapReg->Bits.DRD));
+ DEBUG ((DEBUG_INFO, " FL1GP - 0x%x\n", CapReg->Bits.FL1GP));
+ DEBUG ((DEBUG_INFO, " PI - 0x%x\n", CapReg->Bits.PI));
+}
+
+/**
+ Dump VTd extended capability registers.
+
+ @param[in] ECapReg The extended capability register.
+**/
+VOID
+DumpVtdECapRegs (
+ IN VTD_ECAP_REG *ECapReg
+ )
+{
+ DEBUG ((DEBUG_INFO, " ECapReg:\n", ECapReg->Uint64));
+ DEBUG ((DEBUG_INFO, " C - 0x%x\n", ECapReg->Bits.C));
+ DEBUG ((DEBUG_INFO, " QI - 0x%x\n", ECapReg->Bits.QI));
+ DEBUG ((DEBUG_INFO, " DT - 0x%x\n", ECapReg->Bits.DT));
+ DEBUG ((DEBUG_INFO, " IR - 0x%x\n", ECapReg->Bits.IR));
+ DEBUG ((DEBUG_INFO, " EIM - 0x%x\n", ECapReg->Bits.EIM));
+ DEBUG ((DEBUG_INFO, " PT - 0x%x\n", ECapReg->Bits.PT));
+ DEBUG ((DEBUG_INFO, " SC - 0x%x\n", ECapReg->Bits.SC));
+ DEBUG ((DEBUG_INFO, " IRO - 0x%x\n", ECapReg->Bits.IRO));
+ DEBUG ((DEBUG_INFO, " MHMV - 0x%x\n", ECapReg->Bits.MHMV));
+ DEBUG ((DEBUG_INFO, " ECS - 0x%x\n", ECapReg->Bits.ECS));
+ DEBUG ((DEBUG_INFO, " MTS - 0x%x\n", ECapReg->Bits.MTS));
+ DEBUG ((DEBUG_INFO, " NEST - 0x%x\n", ECapReg->Bits.NEST));
+ DEBUG ((DEBUG_INFO, " DIS - 0x%x\n", ECapReg->Bits.DIS));
+ DEBUG ((DEBUG_INFO, " PASID - 0x%x\n", ECapReg->Bits.PASID));
+ DEBUG ((DEBUG_INFO, " PRS - 0x%x\n", ECapReg->Bits.PRS));
+ DEBUG ((DEBUG_INFO, " ERS - 0x%x\n", ECapReg->Bits.ERS));
+ DEBUG ((DEBUG_INFO, " SRS - 0x%x\n", ECapReg->Bits.SRS));
+ DEBUG ((DEBUG_INFO, " NWFS - 0x%x\n", ECapReg->Bits.NWFS));
+ DEBUG ((DEBUG_INFO, " EAFS - 0x%x\n", ECapReg->Bits.EAFS));
+ DEBUG ((DEBUG_INFO, " PSS - 0x%x\n", ECapReg->Bits.PSS));
+}
+
+/**
+ Prepare VTD configuration.
+
+ @param[in] VTdInfo The VTd engine context information.
+
+ @retval EFI_SUCCESS Prepare Vtd config success
+**/
+EFI_STATUS
+PrepareVtdConfig (
+ IN VTD_INFO *VTdInfo
+ )
+{
+ UINTN Index;
+ UINTN DomainNumber;
+
+ for (Index = 0; Index < VTdInfo->VTdEngineCount; Index++) {
+ DEBUG ((DEBUG_ERROR, "Dump VTd Capability (%d)\n", Index));
+ VTdInfo->VtdUnitInfo[Index].CapReg.Uint64 = MmioRead64 (VTdInfo->VtdUnitInfo[Index].VtdUnitBaseAddress + R_CAP_REG);
+ DumpVtdCapRegs (&VTdInfo->VtdUnitInfo[Index].CapReg);
+ VTdInfo->VtdUnitInfo[Index].ECapReg.Uint64 = MmioRead64 (VTdInfo->VtdUnitInfo[Index].VtdUnitBaseAddress + R_ECAP_REG);
+ DumpVtdECapRegs (&VTdInfo->VtdUnitInfo[Index].ECapReg);
+
+ VTdInfo->VtdUnitInfo[Index].Is5LevelPaging = FALSE;
+ if ((VTdInfo->VtdUnitInfo[Index].CapReg.Bits.SAGAW & BIT2) != 0) {
+ DEBUG ((DEBUG_INFO, "Support 4-level page-table on VTD %d\n", Index));
+ }
+ if ((VTdInfo->VtdUnitInfo[Index].CapReg.Bits.SAGAW & BIT3) != 0) {
+ DEBUG((DEBUG_INFO, "Support 5-level page-table on VTD %d\n", Index));
+ VTdInfo->VtdUnitInfo[Index].Is5LevelPaging = TRUE;
+
+ if ((VTdInfo->HostAddressWidth <= 48) &&
+ ((VTdInfo->VtdUnitInfo[Index].CapReg.Bits.SAGAW & BIT2) != 0)) {
+ DEBUG ((DEBUG_INFO, "Rollback to 4-level page-table on VTD %d\n", Index));
+ VTdInfo->VtdUnitInfo[Index].Is5LevelPaging = FALSE;
+ }
+ }
+ if ((VTdInfo->VtdUnitInfo[Index].CapReg.Bits.SAGAW & (BIT3 | BIT2)) == 0) {
+ DEBUG ((DEBUG_ERROR, "!!!! Page-table type 0x%X is not supported on VTD %d !!!!\n", Index, VTdInfo->VtdUnitInfo[Index].CapReg.Bits.SAGAW));
+ return EFI_UNSUPPORTED;
+ }
+
+ DomainNumber = (UINTN)1 << (UINT8) ((UINTN) VTdInfo->VtdUnitInfo[Index].CapReg.Bits.ND * 2 + 4);
+ if (VTdInfo->VtdUnitInfo[Index].PciDeviceInfo.PciDeviceDataNumber >= DomainNumber) {
+ DEBUG ((DEBUG_ERROR, "!!!! Pci device Number(0x%x) >= DomainNumber(0x%x) !!!!\n", VTdInfo->VtdUnitInfo[Index].PciDeviceInfo.PciDeviceDataNumber, DomainNumber));
+ return EFI_UNSUPPORTED;
+ }
+ }
+ return EFI_SUCCESS;
+}
+
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.c
new file mode 100644
index 00000000..f3c4a2bc
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.c
@@ -0,0 +1,814 @@
+/** @file
+
+ Copyright (c) 2020, Intel Corporation. All rights reserved.<BR>
+
+ SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include <Uefi.h>
+#include <PiPei.h>
+#include <Library/BaseLib.h>
+#include <Library/BaseMemoryLib.h>
+#include <Library/MemoryAllocationLib.h>
+#include <Library/IoLib.h>
+#include <Library/DebugLib.h>
+#include <Library/PeiServicesLib.h>
+#include <Library/HobLib.h>
+#include <IndustryStandard/Vtd.h>
+#include <Ppi/IoMmu.h>
+#include <Ppi/VtdInfo.h>
+#include <Ppi/MemoryDiscovered.h>
+#include <Ppi/EndOfPeiPhase.h>
+#include <Guid/VtdPmrInfoHob.h>
+#include "IntelVTdDmarPei.h"
+
+EFI_GUID mVTdInfoGuid = {
+ 0x222f5e30, 0x5cd, 0x49c6, { 0x8a, 0xc, 0x36, 0xd6, 0x58, 0x41, 0xe0, 0x82 }
+};
+
+EFI_GUID mDmaBufferInfoGuid = {
+ 0x7b624ec7, 0xfb67, 0x4f9c, { 0xb6, 0xb0, 0x4d, 0xfa, 0x9c, 0x88, 0x20, 0x39 }
+};
+
+#define MAP_INFO_SIGNATURE SIGNATURE_32 ('D', 'M', 'A', 'P')
+typedef struct {
+ UINT32 Signature;
+ EDKII_IOMMU_OPERATION Operation;
+ UINTN NumberOfBytes;
+ EFI_PHYSICAL_ADDRESS HostAddress;
+ EFI_PHYSICAL_ADDRESS DeviceAddress;
+} MAP_INFO;
+
+/**
+ Set IOMMU attribute for a system memory.
+
+ If the IOMMU PPI exists, the system memory cannot be used
+ for DMA by default.
+
+ When a device requests a DMA access for a system memory,
+ the device driver need use SetAttribute() to update the IOMMU
+ attribute to request DMA access (read and/or write).
+
+ @param[in] This The PPI instance pointer.
+ @param[in] DeviceHandle The device who initiates the DMA access request.
+ @param[in] Mapping The mapping value returned from Map().
+ @param[in] IoMmuAccess The IOMMU access.
+
+ @retval EFI_SUCCESS The IoMmuAccess is set for the memory range specified by DeviceAddress and Length.
+ @retval EFI_INVALID_PARAMETER Mapping is not a value that was returned by Map().
+ @retval EFI_INVALID_PARAMETER IoMmuAccess specified an illegal combination of access.
+ @retval EFI_UNSUPPORTED The bit mask of IoMmuAccess is not supported by the IOMMU.
+ @retval EFI_UNSUPPORTED The IOMMU does not support the memory range specified by Mapping.
+ @retval EFI_OUT_OF_RESOURCES There are not enough resources available to modify the IOMMU access.
+ @retval EFI_DEVICE_ERROR The IOMMU device reported an error while attempting the operation.
+ @retval EFI_NOT_AVAILABLE_YET DMA protection has been enabled, but DMA buffers are
+ not available to be allocated yet.
+**/
+EFI_STATUS
+EFIAPI
+PeiIoMmuSetAttribute (
+ IN EDKII_IOMMU_PPI *This,
+ IN VOID *Mapping,
+ IN UINT64 IoMmuAccess
+ )
+{
+ VOID *Hob;
+ DMA_BUFFER_INFO *DmaBufferInfo;
+
+ DEBUG ((DEBUG_INFO, "PeiIoMmuSetAttribute:\n"));
+
+ Hob = GetFirstGuidHob (&mDmaBufferInfoGuid);
+ DmaBufferInfo = GET_GUID_HOB_DATA(Hob);
+
+ if (DmaBufferInfo->DmaBufferCurrentTop == 0) {
+ DEBUG ((DEBUG_INFO, "PeiIoMmuSetAttribute: DmaBufferCurrentTop == 0\n"));
+ return EFI_NOT_AVAILABLE_YET;
+ }
+
+ return EFI_SUCCESS;
+}
+
+/**
+ Provides the controller-specific addresses required to access system memory from a
+ DMA bus master.
+
+ @param This The PPI instance pointer.
+ @param Operation Indicates if the bus master is going to read or write to system memory.
+ @param HostAddress The system memory address to map to the PCI controller.
+ @param NumberOfBytes On input the number of bytes to map. On output the number of bytes
+ that were mapped.
+ @param DeviceAddress The resulting map address for the bus master PCI controller to use to
+ access the hosts HostAddress.
+ @param Mapping A resulting value to pass to Unmap().
+
+ @retval EFI_SUCCESS The range was mapped for the returned NumberOfBytes.
+ @retval EFI_UNSUPPORTED The HostAddress cannot be mapped as a common buffer.
+ @retval EFI_INVALID_PARAMETER One or more parameters are invalid.
+ @retval EFI_OUT_OF_RESOURCES The request could not be completed due to a lack of resources.
+ @retval EFI_DEVICE_ERROR The system hardware could not map the requested address.
+ @retval EFI_NOT_AVAILABLE_YET DMA protection has been enabled, but DMA buffers are
+ not available to be allocated yet.
+**/
+EFI_STATUS
+EFIAPI
+PeiIoMmuMap (
+ IN EDKII_IOMMU_PPI *This,
+ IN EDKII_IOMMU_OPERATION Operation,
+ IN VOID *HostAddress,
+ IN OUT UINTN *NumberOfBytes,
+ OUT EFI_PHYSICAL_ADDRESS *DeviceAddress,
+ OUT VOID **Mapping
+ )
+{
+ MAP_INFO *MapInfo;
+ UINTN Length;
+ VOID *Hob;
+ DMA_BUFFER_INFO *DmaBufferInfo;
+
+ Hob = GetFirstGuidHob (&mDmaBufferInfoGuid);
+ DmaBufferInfo = GET_GUID_HOB_DATA(Hob);
+
+ DEBUG ((DEBUG_INFO, "PeiIoMmuMap - HostAddress - 0x%x, NumberOfBytes - %x\n", HostAddress, *NumberOfBytes));
+ DEBUG ((DEBUG_INFO, " DmaBufferCurrentTop - %x\n", DmaBufferInfo->DmaBufferCurrentTop));
+ DEBUG ((DEBUG_INFO, " DmaBufferCurrentBottom - %x\n", DmaBufferInfo->DmaBufferCurrentBottom));
+ DEBUG ((DEBUG_INFO, " Operation - %x\n", Operation));
+
+ if (DmaBufferInfo->DmaBufferCurrentTop == 0) {
+ return EFI_NOT_AVAILABLE_YET;
+ }
+
+ if (Operation == EdkiiIoMmuOperationBusMasterCommonBuffer ||
+ Operation == EdkiiIoMmuOperationBusMasterCommonBuffer64) {
+ *DeviceAddress = (UINTN)HostAddress;
+ *Mapping = NULL;
+ return EFI_SUCCESS;
+ }
+
+ Length = *NumberOfBytes + sizeof (MAP_INFO);
+ if (Length > DmaBufferInfo->DmaBufferCurrentTop - DmaBufferInfo->DmaBufferCurrentBottom) {
+ DEBUG ((DEBUG_ERROR, "PeiIoMmuMap - OUT_OF_RESOURCE\n"));
+ ASSERT (FALSE);
+ return EFI_OUT_OF_RESOURCES;
+ }
+
+ *DeviceAddress = DmaBufferInfo->DmaBufferCurrentBottom;
+ DmaBufferInfo->DmaBufferCurrentBottom += Length;
+
+ MapInfo = (VOID *) (UINTN) (*DeviceAddress + *NumberOfBytes);
+ MapInfo->Signature = MAP_INFO_SIGNATURE;
+ MapInfo->Operation = Operation;
+ MapInfo->NumberOfBytes = *NumberOfBytes;
+ MapInfo->HostAddress = (UINTN) HostAddress;
+ MapInfo->DeviceAddress = *DeviceAddress;
+ *Mapping = MapInfo;
+ DEBUG ((DEBUG_INFO, " Op(%x):DeviceAddress - %x, Mapping - %x\n", Operation, (UINTN) *DeviceAddress, MapInfo));
+
+ //
+ // If this is a read operation from the Bus Master's point of view,
+ // then copy the contents of the real buffer into the mapped buffer
+ // so the Bus Master can read the contents of the real buffer.
+ //
+ if (Operation == EdkiiIoMmuOperationBusMasterRead ||
+ Operation == EdkiiIoMmuOperationBusMasterRead64) {
+ CopyMem (
+ (VOID *) (UINTN) MapInfo->DeviceAddress,
+ (VOID *) (UINTN) MapInfo->HostAddress,
+ MapInfo->NumberOfBytes
+ );
+ }
+
+ return EFI_SUCCESS;
+}
+
+/**
+ Completes the Map() operation and releases any corresponding resources.
+
+ @param This The PPI instance pointer.
+ @param Mapping The mapping value returned from Map().
+
+ @retval EFI_SUCCESS The range was unmapped.
+ @retval EFI_INVALID_PARAMETER Mapping is not a value that was returned by Map().
+ @retval EFI_DEVICE_ERROR The data was not committed to the target system memory.
+ @retval EFI_NOT_AVAILABLE_YET DMA protection has been enabled, but DMA buffers are
+ not available to be allocated yet.
+**/
+EFI_STATUS
+EFIAPI
+PeiIoMmuUnmap (
+ IN EDKII_IOMMU_PPI *This,
+ IN VOID *Mapping
+ )
+{
+ MAP_INFO *MapInfo;
+ UINTN Length;
+ VOID *Hob;
+ DMA_BUFFER_INFO *DmaBufferInfo;
+
+ Hob = GetFirstGuidHob (&mDmaBufferInfoGuid);
+ DmaBufferInfo = GET_GUID_HOB_DATA(Hob);
+
+ DEBUG ((DEBUG_INFO, "PeiIoMmuUnmap - Mapping - %x\n", Mapping));
+ DEBUG ((DEBUG_INFO, " DmaBufferCurrentTop - %x\n", DmaBufferInfo->DmaBufferCurrentTop));
+ DEBUG ((DEBUG_INFO, " DmaBufferCurrentBottom - %x\n", DmaBufferInfo->DmaBufferCurrentBottom));
+
+ if (DmaBufferInfo->DmaBufferCurrentTop == 0) {
+ return EFI_NOT_AVAILABLE_YET;
+ }
+
+ if (Mapping == NULL) {
+ return EFI_SUCCESS;
+ }
+
+ MapInfo = Mapping;
+ ASSERT (MapInfo->Signature == MAP_INFO_SIGNATURE);
+ DEBUG ((DEBUG_INFO, " Op(%x):DeviceAddress - %x, NumberOfBytes - %x\n", MapInfo->Operation, (UINTN) MapInfo->DeviceAddress, MapInfo->NumberOfBytes));
+
+ //
+ // If this is a write operation from the Bus Master's point of view,
+ // then copy the contents of the mapped buffer into the real buffer
+ // so the processor can read the contents of the real buffer.
+ //
+ if (MapInfo->Operation == EdkiiIoMmuOperationBusMasterWrite ||
+ MapInfo->Operation == EdkiiIoMmuOperationBusMasterWrite64) {
+ CopyMem (
+ (VOID *) (UINTN) MapInfo->HostAddress,
+ (VOID *) (UINTN) MapInfo->DeviceAddress,
+ MapInfo->NumberOfBytes
+ );
+ }
+
+ Length = MapInfo->NumberOfBytes + sizeof (MAP_INFO);
+ if (DmaBufferInfo->DmaBufferCurrentBottom == MapInfo->DeviceAddress + Length) {
+ DmaBufferInfo->DmaBufferCurrentBottom -= Length;
+ }
+
+ return EFI_SUCCESS;
+}
+
+/**
+ Allocates pages that are suitable for an OperationBusMasterCommonBuffer or
+ OperationBusMasterCommonBuffer64 mapping.
+
+ @param This The PPI instance pointer.
+ @param MemoryType The type of memory to allocate, EfiBootServicesData or
+ EfiRuntimeServicesData.
+ @param Pages The number of pages to allocate.
+ @param HostAddress A pointer to store the base system memory address of the
+ allocated range.
+ @param Attributes The requested bit mask of attributes for the allocated range.
+
+ @retval EFI_SUCCESS The requested memory pages were allocated.
+ @retval EFI_UNSUPPORTED Attributes is unsupported. The only legal attribute bits are
+ MEMORY_WRITE_COMBINE, MEMORY_CACHED and DUAL_ADDRESS_CYCLE.
+ @retval EFI_INVALID_PARAMETER One or more parameters are invalid.
+ @retval EFI_OUT_OF_RESOURCES The memory pages could not be allocated.
+ @retval EFI_NOT_AVAILABLE_YET DMA protection has been enabled, but DMA buffers are
+ not available to be allocated yet.
+**/
+EFI_STATUS
+EFIAPI
+PeiIoMmuAllocateBuffer (
+ IN EDKII_IOMMU_PPI *This,
+ IN EFI_MEMORY_TYPE MemoryType,
+ IN UINTN Pages,
+ IN OUT VOID **HostAddress,
+ IN UINT64 Attributes
+ )
+{
+ UINTN Length;
+ VOID *Hob;
+ DMA_BUFFER_INFO *DmaBufferInfo;
+
+ Hob = GetFirstGuidHob (&mDmaBufferInfoGuid);
+ DmaBufferInfo = GET_GUID_HOB_DATA(Hob);
+
+ DEBUG ((DEBUG_INFO, "PeiIoMmuAllocateBuffer - page - %x\n", Pages));
+ DEBUG ((DEBUG_INFO, " DmaBufferCurrentTop - %x\n", DmaBufferInfo->DmaBufferCurrentTop));
+ DEBUG ((DEBUG_INFO, " DmaBufferCurrentBottom - %x\n", DmaBufferInfo->DmaBufferCurrentBottom));
+
+ if (DmaBufferInfo->DmaBufferCurrentTop == 0) {
+ return EFI_NOT_AVAILABLE_YET;
+ }
+
+ Length = EFI_PAGES_TO_SIZE (Pages);
+ if (Length > DmaBufferInfo->DmaBufferCurrentTop - DmaBufferInfo->DmaBufferCurrentBottom) {
+ DEBUG ((DEBUG_ERROR, "PeiIoMmuAllocateBuffer - OUT_OF_RESOURCE\n"));
+ ASSERT (FALSE);
+ return EFI_OUT_OF_RESOURCES;
+ }
+ *HostAddress = (VOID *) (UINTN) (DmaBufferInfo->DmaBufferCurrentTop - Length);
+ DmaBufferInfo->DmaBufferCurrentTop -= Length;
+
+ DEBUG ((DEBUG_INFO, "PeiIoMmuAllocateBuffer - allocate - %x\n", *HostAddress));
+ return EFI_SUCCESS;
+}
+
+/**
+ Frees memory that was allocated with AllocateBuffer().
+
+ @param This The PPI instance pointer.
+ @param Pages The number of pages to free.
+ @param HostAddress The base system memory address of the allocated range.
+
+ @retval EFI_SUCCESS The requested memory pages were freed.
+ @retval EFI_INVALID_PARAMETER The memory range specified by HostAddress and Pages
+ was not allocated with AllocateBuffer().
+ @retval EFI_NOT_AVAILABLE_YET DMA protection has been enabled, but DMA buffers are
+ not available to be allocated yet.
+**/
+EFI_STATUS
+EFIAPI
+PeiIoMmuFreeBuffer (
+ IN EDKII_IOMMU_PPI *This,
+ IN UINTN Pages,
+ IN VOID *HostAddress
+ )
+{
+ UINTN Length;
+ VOID *Hob;
+ DMA_BUFFER_INFO *DmaBufferInfo;
+
+ Hob = GetFirstGuidHob (&mDmaBufferInfoGuid);
+ DmaBufferInfo = GET_GUID_HOB_DATA (Hob);
+
+ DEBUG ((DEBUG_INFO, "PeiIoMmuFreeBuffer - page - %x, HostAddr - %x\n", Pages, HostAddress));
+ DEBUG ((DEBUG_INFO, " DmaBufferCurrentTop - %x\n", DmaBufferInfo->DmaBufferCurrentTop));
+ DEBUG ((DEBUG_INFO, " DmaBufferCurrentBottom - %x\n", DmaBufferInfo->DmaBufferCurrentBottom));
+
+ if (DmaBufferInfo->DmaBufferCurrentTop == 0) {
+ return EFI_NOT_AVAILABLE_YET;
+ }
+
+ Length = EFI_PAGES_TO_SIZE (Pages);
+ if ((UINTN)HostAddress == DmaBufferInfo->DmaBufferCurrentTop) {
+ DmaBufferInfo->DmaBufferCurrentTop += Length;
+ }
+
+ return EFI_SUCCESS;
+}
+
+EDKII_IOMMU_PPI mIoMmuPpi = {
+ EDKII_IOMMU_PPI_REVISION,
+ PeiIoMmuSetAttribute,
+ PeiIoMmuMap,
+ PeiIoMmuUnmap,
+ PeiIoMmuAllocateBuffer,
+ PeiIoMmuFreeBuffer,
+};
+
+CONST EFI_PEI_PPI_DESCRIPTOR mIoMmuPpiList = {
+ EFI_PEI_PPI_DESCRIPTOR_PPI | EFI_PEI_PPI_DESCRIPTOR_TERMINATE_LIST,
+ &gEdkiiIoMmuPpiGuid,
+ (VOID *) &mIoMmuPpi
+};
+
+/**
+ Release the memory in the Intel VTd Info.
+
+ @param[in] VTdInfo The VTd engine context information.
+**/
+VOID
+ReleaseVTdInfo (
+ IN VTD_INFO *VTdInfo
+ )
+{
+ UINTN Index;
+
+ for (Index = 0; Index < VTdInfo->VTdEngineCount; Index++) {
+ DEBUG ((DEBUG_INFO, "Release momery in VTdInfo[%d]\n", Index));
+
+ if (VTdInfo->VtdUnitInfo[Index].FixedSecondLevelPagingEntry) {
+ FreePages ((VOID *) (UINTN) VTdInfo->VtdUnitInfo[Index].FixedSecondLevelPagingEntry, 1);
+ VTdInfo->VtdUnitInfo[Index].FixedSecondLevelPagingEntry = 0;
+ }
+
+ if (VTdInfo->VtdUnitInfo[Index].RmrrSecondLevelPagingEntry) {
+ FreePages ((VOID *) (UINTN) VTdInfo->VtdUnitInfo[Index].RmrrSecondLevelPagingEntry, 1);
+ VTdInfo->VtdUnitInfo[Index].RmrrSecondLevelPagingEntry = 0;
+ }
+
+ if (VTdInfo->VtdUnitInfo[Index].RootEntryTable) {
+ FreePages ((VOID *) (UINTN) VTdInfo->VtdUnitInfo[Index].RootEntryTable, VTdInfo->VtdUnitInfo[Index].RootEntryTablePageSize);
+ VTdInfo->VtdUnitInfo[Index].RootEntryTable = 0;
+ }
+
+ if (VTdInfo->VtdUnitInfo[Index].ExtRootEntryTable) {
+ FreePages ((VOID *) (UINTN) VTdInfo->VtdUnitInfo[Index].ExtRootEntryTable, VTdInfo->VtdUnitInfo[Index].ExtRootEntryTablePageSize);
+ VTdInfo->VtdUnitInfo[Index].ExtRootEntryTable = 0;
+ }
+
+ if (VTdInfo->VtdUnitInfo[Index].PciDeviceInfo.PciDeviceData) {
+ FreePages ((VOID *) (UINTN) VTdInfo->VtdUnitInfo[Index].PciDeviceInfo.PciDeviceData, VTdInfo->VtdUnitInfo[Index].PciDeviceInfo.PciDeviceDataPageSize);
+ VTdInfo->VtdUnitInfo[Index].PciDeviceInfo.PciDeviceDataPageSize = 0;
+ VTdInfo->VtdUnitInfo[Index].PciDeviceInfo.PciDeviceData = 0;
+ VTdInfo->VtdUnitInfo[Index].PciDeviceInfo.PciDeviceDataNumber = 0;
+ VTdInfo->VtdUnitInfo[Index].PciDeviceInfo.PciDeviceDataMaxNumber = 0;
+ }
+ }
+}
+
+/**
+ Initializes the Intel VTd Info.
+
+ @retval EFI_SUCCESS The Intel VTd Info is successfully initialized.
+ @retval EFI_OUT_OF_RESOURCES Can't initialize the driver.
+**/
+EFI_STATUS
+InitVTdInfo (
+ VOID
+ )
+{
+ EFI_STATUS Status;
+ EFI_ACPI_DMAR_HEADER *AcpiDmarTable;
+ VOID *Hob;
+ VTD_INFO *VTdInfo;
+ UINT64 EngineMask;
+
+ Status = PeiServicesLocatePpi (
+ &gEdkiiVTdInfoPpiGuid,
+ 0,
+ NULL,
+ (VOID **)&AcpiDmarTable
+ );
+ ASSERT_EFI_ERROR (Status);
+
+ DumpAcpiDMAR (AcpiDmarTable);
+
+ //
+ // Clear old VTdInfo Hob.
+ //
+ Hob = GetFirstGuidHob (&mVTdInfoGuid);
+ if (Hob != NULL) {
+ DEBUG ((DEBUG_INFO, " Find Hob : mVTdInfoGuid - 0x%x\n", Hob));
+
+ VTdInfo = GET_GUID_HOB_DATA(Hob);
+ EngineMask = LShiftU64 (1, VTdInfo->VTdEngineCount) - 1;
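+ //
+ // Block DMA on all engines before the previously built translation tables are released,
+ // so that no engine keeps referencing memory that is about to be freed.
+ //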
+ EnableVTdTranslationProtectionAll (VTdInfo, EngineMask);
+
+ ReleaseVTdInfo (VTdInfo);
+ VTdInfo->VTdEngineCount = 0;
+
+ ZeroMem (&((EFI_HOB_GUID_TYPE *) Hob)->Name, sizeof (EFI_GUID));
+ }
+
+ //
+ // Get DMAR information to local VTdInfo
+ //
+ Status = ParseDmarAcpiTableDrhd (AcpiDmarTable);
+ if (EFI_ERROR(Status)) {
+ DEBUG ((DEBUG_ERROR, " ParseDmarAcpiTableDrhd : %r\n", Status));
+ return Status;
+ }
+
+ //
+ // NOTE: Do not parse RMRR here, because RMRR may cause DMAR programming.
+ //
+
+ return EFI_SUCCESS;
+}
+
+/**
+ Initializes the Intel VTd DMAR for all memory.
+
+ @retval EFI_SUCCESS Driver is successfully initialized.
+ @retval RETURN_NOT_READY Failed to get the VTdInfo Hob.
+**/
+EFI_STATUS
+InitVTdDmarForAll (
+ VOID
+ )
+{
+ VOID *Hob;
+ VTD_INFO *VTdInfo;
+ UINT64 EngineMask;
+
+ Hob = GetFirstGuidHob (&mVTdInfoGuid);
+ if (Hob == NULL) {
+ DEBUG ((DEBUG_ERROR, "Fail to get VTdInfo Hob.\n"));
+ return RETURN_NOT_READY;
+ }
+ VTdInfo = GET_GUID_HOB_DATA (Hob);
+ EngineMask = LShiftU64 (1, VTdInfo->VTdEngineCount) - 1;
+
+ EnableVTdTranslationProtectionAll (VTdInfo, EngineMask);
+
+ return EFI_SUCCESS;
+}
+
+/**
+ Initializes DMA buffer
+
+ @retval EFI_SUCCESS DMA buffer is successfully initialized.
+ @retval EFI_INVALID_PARAMETER Invalid DMA buffer size.
+ @retval EFI_OUT_OF_RESOURCES Can't initialize DMA buffer.
+**/
+EFI_STATUS
+InitDmaBuffer(
+ VOID
+ )
+{
+ DMA_BUFFER_INFO *DmaBufferInfo;
+ VOID *Hob;
+ VOID *VtdPmrHobPtr;
+ VTD_PMR_INFO_HOB *VtdPmrHob;
+
+ DEBUG ((DEBUG_INFO, "InitDmaBuffer :\n"));
+
+ Hob = GetFirstGuidHob (&mDmaBufferInfoGuid);
+ DmaBufferInfo = GET_GUID_HOB_DATA (Hob);
+ VtdPmrHobPtr = GetFirstGuidHob (&gVtdPmrInfoDataHobGuid);
+
+ /**
+ When gVtdPmrInfoDataHobGuid exists, it means:
+ 1. The DMA buffer is reserved by the memory initialization code
+ 2. PeiGetVtdPmrAlignmentLib is used to get the alignment
+ 3. The protection regions are determined by the system memory map
+ 4. The protection regions are conveyed through VTD_PMR_INFO_HOB
+ (see the producer-side sketch after this function)
+
+ When gVtdPmrInfoDataHobGuid doesn't exist, it means:
+ 1. The IntelVTdDmar driver calculates the PMR memory alignment
+ 2. The DMA buffer is reserved by AllocateAlignedPages()
+ **/
+
+ if (DmaBufferInfo->DmaBufferSize == 0) {
+ DEBUG ((DEBUG_INFO, " DmaBufferSize is 0\n"));
+ return EFI_INVALID_PARAMETER;
+ }
+
+ if (VtdPmrHobPtr == NULL) {
+ //
+ // Allocate memory for DMA buffer
+ //
+ DmaBufferInfo->DmaBufferBase = (UINT64) (UINTN) AllocateAlignedPages (EFI_SIZE_TO_PAGES ((UINTN) DmaBufferInfo->DmaBufferSize), 0);
+ if (DmaBufferInfo->DmaBufferBase == 0) {
+ DEBUG ((DEBUG_ERROR, " InitDmaBuffer : OutOfResource\n"));
+ return EFI_OUT_OF_RESOURCES;
+ }
+ DmaBufferInfo->DmaBufferLimit = DmaBufferInfo->DmaBufferBase + DmaBufferInfo->DmaBufferSize;
+ DEBUG ((DEBUG_INFO, "Alloc DMA buffer success.\n"));
+ } else {
+ //
+ // Get the PMR ranges information for the VTd PMR hob
+ //
+ VtdPmrHob = GET_GUID_HOB_DATA (VtdPmrHobPtr);
+ DmaBufferInfo->DmaBufferBase = VtdPmrHob->ProtectedLowLimit;
+ DmaBufferInfo->DmaBufferLimit = VtdPmrHob->ProtectedHighBase;
+ }
+ DmaBufferInfo->DmaBufferCurrentTop = DmaBufferInfo->DmaBufferBase + DmaBufferInfo->DmaBufferSize;
+ DmaBufferInfo->DmaBufferCurrentBottom = DmaBufferInfo->DmaBufferBase;
+
+ DEBUG ((DEBUG_INFO, " DmaBufferSize : 0x%lx\n", DmaBufferInfo->DmaBufferSize));
+ DEBUG ((DEBUG_INFO, " DmaBufferBase : 0x%lx\n", DmaBufferInfo->DmaBufferBase));
+ DEBUG ((DEBUG_INFO, " DmaBufferLimit : 0x%lx\n", DmaBufferInfo->DmaBufferLimit));
+ DEBUG ((DEBUG_INFO, " DmaBufferCurrentTop : 0x%lx\n", DmaBufferInfo->DmaBufferCurrentTop));
+ DEBUG ((DEBUG_INFO, " DmaBufferCurrentBottom : 0x%lx\n", DmaBufferInfo->DmaBufferCurrentBottom));
+
+ return EFI_SUCCESS;
+}
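
When the platform memory initialization code reserves the DMA buffer itself, it conveys the protected regions through gVtdPmrInfoDataHobGuid, as consumed above. A minimal producer-side sketch is shown below; only the two fields read by InitDmaBuffer() are set, and DmaBufferBase/DmaBufferSize stand for platform-specific values:

  #include <PiPei.h>
  #include <Library/HobLib.h>
  #include <Library/BaseMemoryLib.h>
  #include <Guid/VtdPmrInfoHob.h>

  VOID
  ExamplePublishVtdPmrInfoHob (
    IN UINT64  DmaBufferBase,
    IN UINT64  DmaBufferSize
    )
  {
    VTD_PMR_INFO_HOB  *VtdPmrHob;

    VtdPmrHob = BuildGuidHob (&gVtdPmrInfoDataHobGuid, sizeof (VTD_PMR_INFO_HOB));
    if (VtdPmrHob == NULL) {
      return;
    }
    ZeroMem (VtdPmrHob, sizeof (VTD_PMR_INFO_HOB));
    //
    // The DMA buffer occupies [ProtectedLowLimit, ProtectedHighBase); memory
    // outside this window is covered by the protected memory regions.
    //
    VtdPmrHob->ProtectedLowLimit = DmaBufferBase;
    VtdPmrHob->ProtectedHighBase = DmaBufferBase + DmaBufferSize;
  }
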
+
+/**
+ Initializes the Intel VTd DMAR for DMA buffer.
+
+ @retval EFI_SUCCESS The Intel VTd DMAR is successfully initialized for the DMA buffer.
+ @retval EFI_OUT_OF_RESOURCES Can't initialize the driver.
+ @retval EFI_DEVICE_ERROR DMAR translation is not enabled.
+**/
+EFI_STATUS
+InitVTdDmarForDma (
+ VOID
+ )
+{
+ VOID *Hob;
+ VTD_INFO *VTdInfo;
+ EFI_STATUS Status;
+ EFI_PEI_PPI_DESCRIPTOR *OldDescriptor;
+ EDKII_IOMMU_PPI *OldIoMmuPpi;
+
+ Hob = GetFirstGuidHob (&mVTdInfoGuid);
+ VTdInfo = GET_GUID_HOB_DATA (Hob);
+
+ DEBUG ((DEBUG_INFO, "PrepareVtdConfig\n"));
+ Status = PrepareVtdConfig (VTdInfo);
+ if (EFI_ERROR (Status)) {
+ ASSERT_EFI_ERROR (Status);
+ return Status;
+ }
+
+ // create root entry table
+ DEBUG ((DEBUG_INFO, "SetupTranslationTable\n"));
+ Status = SetupTranslationTable (VTdInfo);
+ if (EFI_ERROR (Status)) {
+ ASSERT_EFI_ERROR (Status);
+ return Status;
+ }
+
+ // If there is RMRR memory, parse it here.
+ DEBUG ((DEBUG_INFO, "PeiParseDmarAcpiTableRmrr\n"));
+ ParseDmarAcpiTableRmrr (VTdInfo);
+
+ DEBUG ((DEBUG_INFO, "EnableVtdDmar\n"));
+ Status = EnableVTdTranslationProtection(VTdInfo);
+ if (EFI_ERROR (Status)) {
+ return Status;
+ }
+
+ DEBUG ((DEBUG_INFO, "Install gEdkiiIoMmuPpiGuid\n"));
+ // install protocol
+ //
+ // (Re)Install PPI.
+ //
+ Status = PeiServicesLocatePpi (
+ &gEdkiiIoMmuPpiGuid,
+ 0,
+ &OldDescriptor,
+ (VOID **) &OldIoMmuPpi
+ );
+ if (!EFI_ERROR (Status)) {
+ Status = PeiServicesReInstallPpi (OldDescriptor, &mIoMmuPpiList);
+ } else {
+ Status = PeiServicesInstallPpi (&mIoMmuPpiList);
+ }
+ ASSERT_EFI_ERROR (Status);
+
+ return Status;
+}
+
+/**
+ This function handles S3 resume task at the end of PEI
+
+ @param[in] PeiServices Pointer to PEI Services Table.
+ @param[in] NotifyDesc Pointer to the descriptor for the Notification event that
+ caused this function to execute.
+ @param[in] Ppi Pointer to the PPI data associated with this function.
+
+ @retval EFI_SUCCESS Always returns EFI_SUCCESS.
+**/
+EFI_STATUS
+EFIAPI
+S3EndOfPeiNotify(
+ IN EFI_PEI_SERVICES **PeiServices,
+ IN EFI_PEI_NOTIFY_DESCRIPTOR *NotifyDesc,
+ IN VOID *Ppi
+ )
+{
+ VOID *Hob;
+ VTD_INFO *VTdInfo;
+ UINT64 EngineMask;
+
+ DEBUG((DEBUG_INFO, "VTd DMAR PEI S3EndOfPeiNotify\n"));
+
+ if ((PcdGet8(PcdVTdPolicyPropertyMask) & BIT1) == 0) {
+ Hob = GetFirstGuidHob (&mVTdInfoGuid);
+ if (Hob == NULL) {
+ return EFI_SUCCESS;
+ }
+ VTdInfo = GET_GUID_HOB_DATA(Hob);
+
+ EngineMask = LShiftU64 (1, VTdInfo->VTdEngineCount) - 1;
+ DisableVTdTranslationProtection (VTdInfo, EngineMask);
+ }
+ return EFI_SUCCESS;
+}
+
+EFI_PEI_NOTIFY_DESCRIPTOR mS3EndOfPeiNotifyDesc = {
+ (EFI_PEI_PPI_DESCRIPTOR_NOTIFY_CALLBACK | EFI_PEI_PPI_DESCRIPTOR_TERMINATE_LIST),
+ &gEfiEndOfPeiSignalPpiGuid,
+ S3EndOfPeiNotify
+};
+
+/**
+ This function handles VTd engine setup
+
+ @param[in] PeiServices Pointer to PEI Services Table.
+ @param[in] NotifyDesc Pointer to the descriptor for the Notification event that
+ caused this function to execute.
+ @param[in] Ppi Pointer to the PPI data associated with this function.
+
+ @retval EFI_SUCCESS Always returns EFI_SUCCESS.
+**/
+EFI_STATUS
+EFIAPI
+VTdInfoNotify (
+ IN EFI_PEI_SERVICES **PeiServices,
+ IN EFI_PEI_NOTIFY_DESCRIPTOR *NotifyDesc,
+ IN VOID *Ppi
+ )
+{
+ EFI_STATUS Status;
+ VOID *MemoryDiscovered;
+ BOOLEAN MemoryInitialized;
+
+ DEBUG ((DEBUG_INFO, "VTdInfoNotify\n"));
+
+ //
+ // Check if memory is initialized.
+ //
+ MemoryInitialized = FALSE;
+ Status = PeiServicesLocatePpi (
+ &gEfiPeiMemoryDiscoveredPpiGuid,
+ 0,
+ NULL,
+ &MemoryDiscovered
+ );
+ if (!EFI_ERROR(Status)) {
+ MemoryInitialized = TRUE;
+ }
+
+ DEBUG ((DEBUG_INFO, "MemoryInitialized - %x\n", MemoryInitialized));
+
+ //
+ // NOTE: We need to re-initialize VTdInfo because the previous information might be overridden.
+ //
+ InitVTdInfo ();
+
+ if (!MemoryInitialized) {
+ //
+ // If the memory is not initialized,
+ // protect all system memory.
+ //
+
+ InitVTdDmarForAll ();
+
+ //
+ // Install PPI.
+ //
+ Status = PeiServicesInstallPpi (&mIoMmuPpiList);
+ ASSERT_EFI_ERROR(Status);
+ } else {
+ //
+ // If the memory is initialized,
+ // allocate the DMA buffer and protect the rest of system memory.
+ //
+
+ Status = InitDmaBuffer ();
+ ASSERT_EFI_ERROR(Status);
+
+ InitVTdDmarForDma ();
+ }
+
+ return EFI_SUCCESS;
+}
+
+EFI_PEI_NOTIFY_DESCRIPTOR mVTdInfoNotifyDesc = {
+ (EFI_PEI_PPI_DESCRIPTOR_NOTIFY_CALLBACK | EFI_PEI_PPI_DESCRIPTOR_TERMINATE_LIST),
+ &gEdkiiVTdInfoPpiGuid,
+ VTdInfoNotify
+};
+
+/**
+ Initializes the Intel VTd DMAR PEIM.
+
+ @param[in] FileHandle Handle of the file being invoked.
+ @param[in] PeiServices Describes the list of possible PEI Services.
+
+ @retval EFI_SUCCESS The Intel VTd DMAR PEIM is successfully initialized.
+ @retval EFI_OUT_OF_RESOURCES Can't initialize the driver.
+**/
+EFI_STATUS
+EFIAPI
+IntelVTdDmarInitialize (
+ IN EFI_PEI_FILE_HANDLE FileHandle,
+ IN CONST EFI_PEI_SERVICES **PeiServices
+ )
+{
+ EFI_STATUS Status;
+ EFI_BOOT_MODE BootMode;
+ DMA_BUFFER_INFO *DmaBufferInfo;
+
+ DEBUG ((DEBUG_INFO, "IntelVTdDmarInitialize\n"));
+
+ if ((PcdGet8(PcdVTdPolicyPropertyMask) & BIT0) == 0) {
+ return EFI_UNSUPPORTED;
+ }
+
+ DmaBufferInfo = BuildGuidHob (&mDmaBufferInfoGuid, sizeof (DMA_BUFFER_INFO));
+ ASSERT(DmaBufferInfo != NULL);
+ if (DmaBufferInfo == NULL) {
+ return EFI_OUT_OF_RESOURCES;
+ }
+ ZeroMem (DmaBufferInfo, sizeof (DMA_BUFFER_INFO));
+
+ PeiServicesGetBootMode (&BootMode);
+
+ if (BootMode == BOOT_ON_S3_RESUME) {
+ DmaBufferInfo->DmaBufferSize = PcdGet32 (PcdVTdPeiDmaBufferSizeS3);
+ } else {
+ DmaBufferInfo->DmaBufferSize = PcdGet32 (PcdVTdPeiDmaBufferSize);
+ }
+
+ Status = PeiServicesNotifyPpi (&mVTdInfoNotifyDesc);
+ ASSERT_EFI_ERROR (Status);
+
+ //
+ // Register EndOfPei Notify for S3
+ //
+ if (BootMode == BOOT_ON_S3_RESUME) {
+ Status = PeiServicesNotifyPpi (&mS3EndOfPeiNotifyDesc);
+ ASSERT_EFI_ERROR (Status);
+ }
+
+ return EFI_SUCCESS;
+}
+
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.h b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.h
new file mode 100644
index 00000000..a3bb8827
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.h
@@ -0,0 +1,224 @@
+/** @file
+ The definition for DMA access Library.
+
+ Copyright (c) 2020, Intel Corporation. All rights reserved.<BR>
+ SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#ifndef __DMA_ACCESS_LIB_H__
+#define __DMA_ACCESS_LIB_H__
+
+#define MAX_VTD_PCI_DATA_NUMBER 0x100
+
+typedef struct {
+ UINT8 DeviceType;
+ VTD_SOURCE_ID PciSourceId;
+} PEI_PCI_DEVICE_DATA;
+
+typedef struct {
+ BOOLEAN IncludeAllFlag;
+ UINT32 PciDeviceDataNumber;
+ UINT32 PciDeviceDataMaxNumber;
+ UINT32 PciDeviceDataPageSize;
+ UINT32 PciDeviceData;
+} PEI_PCI_DEVICE_INFORMATION;
+
+typedef struct {
+ UINT32 VtdUnitBaseAddress;
+ UINT16 Segment;
+ VTD_CAP_REG CapReg;
+ VTD_ECAP_REG ECapReg;
+ BOOLEAN Is5LevelPaging;
+ UINT32 FixedSecondLevelPagingEntry;
+ UINT32 RmrrSecondLevelPagingEntry;
+ UINT32 RootEntryTable;
+ UINT32 ExtRootEntryTable;
+ UINT16 RootEntryTablePageSize;
+ UINT16 ExtRootEntryTablePageSize;
+ PEI_PCI_DEVICE_INFORMATION PciDeviceInfo;
+} VTD_UNIT_INFO;
+
+typedef struct {
+ UINT32 AcpiDmarTable;
+ UINT8 HostAddressWidth;
+ UINT32 VTdEngineCount;
+ VTD_UNIT_INFO VtdUnitInfo[1];
+} VTD_INFO;
+
+typedef struct {
+ UINT64 DmaBufferBase;
+ UINT64 DmaBufferSize;
+ UINT64 DmaBufferLimit;
+ UINT64 DmaBufferCurrentTop;
+ UINT64 DmaBufferCurrentBottom;
+} DMA_BUFFER_INFO;
+
+/**
+ Enable VTd translation table protection.
+
+ @param[in] VTdInfo The VTd engine context information.
+ @param[in] EngineMask The mask of the VTd engine to be accessed.
+**/
+VOID
+EnableVTdTranslationProtectionAll (
+ IN VTD_INFO *VTdInfo,
+ IN UINT64 EngineMask
+ );
+
+/**
+ Enable VTd translation table protection.
+
+ @param[in] VTdInfo The VTd engine context information.
+
+ @retval EFI_SUCCESS DMAR translation is enabled.
+ @retval EFI_DEVICE_ERROR DMAR translation is not enabled.
+**/
+EFI_STATUS
+EnableVTdTranslationProtection (
+ IN VTD_INFO *VTdInfo
+ );
+
+/**
+ Disable VTd translation table protection.
+
+ @param[in] VTdInfo The VTd engine context information.
+ @param[in] EngineMask The mask of the VTd engine to be accessed.
+**/
+VOID
+DisableVTdTranslationProtection (
+ IN VTD_INFO *VTdInfo,
+ IN UINT64 EngineMask
+ );
+
+/**
+ Parse DMAR DRHD table.
+
+ @param[in] AcpiDmarTable DMAR ACPI table
+
+ @return EFI_SUCCESS The DMAR DRHD table is parsed.
+**/
+EFI_STATUS
+ParseDmarAcpiTableDrhd (
+ IN EFI_ACPI_DMAR_HEADER *AcpiDmarTable
+ );
+
+/**
+ Parse DMAR RMRR table.
+
+ @param[in] VTdInfo The VTd engine context information.
+**/
+VOID
+ParseDmarAcpiTableRmrr (
+ IN VTD_INFO *VTdInfo
+ );
+
+/**
+ Dump DMAR ACPI table.
+
+ @param[in] Dmar DMAR ACPI table
+**/
+VOID
+DumpAcpiDMAR (
+ IN EFI_ACPI_DMAR_HEADER *Dmar
+ );
+
+/**
+ Prepare VTD configuration.
+
+ @param[in] VTdInfo The VTd engine context information.
+
+ @retval EFI_SUCCESS The VTd configuration is prepared successfully.
+**/
+EFI_STATUS
+PrepareVtdConfig (
+ IN VTD_INFO *VTdInfo
+ );
+
+/**
+ Setup VTd translation table.
+
+ @param[in] VTdInfo The VTd engine context information.
+
+ @retval EFI_SUCCESS Setup translation table successfully.
+ @retval EFI_OUT_OF_RESOURCES Failed to set up the translation table.
+**/
+EFI_STATUS
+SetupTranslationTable (
+ IN VTD_INFO *VTdInfo
+ );
+
+/**
+ Flush VTD page table and context table memory.
+
+ This action is to make sure the IOMMU engine can get final data in memory.
+
+ @param[in] VTdUnitInfo The VTd engine unit information.
+ @param[in] Base The base address of memory to be flushed.
+ @param[in] Size The size of memory in bytes to be flushed.
+**/
+VOID
+FlushPageTableMemory (
+ IN VTD_UNIT_INFO *VTdUnitInfo,
+ IN UINTN Base,
+ IN UINTN Size
+ );
+
+/**
+ Allocate zero pages.
+
+ @param[in] Pages the number of pages.
+
+ @return the page address.
+ @retval NULL No resource to allocate pages.
+**/
+VOID *
+EFIAPI
+AllocateZeroPages (
+ IN UINTN Pages
+ );
+
+/**
+ Return the index of PCI data.
+
+ @param[in] VTdUnitInfo The VTd engine unit information.
+ @param[in] Segment The Segment used to identify a VTd engine.
+ @param[in] SourceId The SourceId used to identify a VTd engine and table entry.
+
+ @return The index of the PCI data.
+ @retval (UINTN)-1 The PCI data is not found.
+**/
+UINTN
+GetPciDataIndex (
+ IN VTD_UNIT_INFO *VTdUnitInfo,
+ IN UINT16 Segment,
+ IN VTD_SOURCE_ID SourceId
+ );
+
+/**
+ Always enable the VTd page attribute for the device.
+
+ @param[in] VTdInfo The VTd engine context information.
+ @param[in] Segment The Segment used to identify a VTd engine.
+ @param[in] SourceId The SourceId used to identify a VTd engine and table entry.
+ @param[in] MemoryBase The base of the memory.
+ @param[in] MemoryLimit The limit of the memory.
+ @param[in] IoMmuAccess The IOMMU access.
+
+ @retval EFI_SUCCESS The VTd entry is updated to always enable all DMA access for the specific device.
+**/
+EFI_STATUS
+EnableRmrrPageAttribute (
+ IN VTD_INFO *VTdInfo,
+ IN UINT16 Segment,
+ IN VTD_SOURCE_ID SourceId,
+ IN UINT64 MemoryBase,
+ IN UINT64 MemoryLimit,
+ IN UINT64 IoMmuAccess
+ );
+
+extern EFI_GUID mVTdInfoGuid;
+extern EFI_GUID mDmaBufferInfoGuid;
+
+#endif
+
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.inf b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.inf
new file mode 100644
index 00000000..b97ff900
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.inf
@@ -0,0 +1,65 @@
+## @file
+# Component INF file for the Intel VTd DMAR PEIM.
+#
+# This driver initializes the VTd engine based upon EDKII_VTD_INFO_PPI
+# and provides DMA protection in the PEI phase.
+#
+# Copyright (c) 2020, Intel Corporation. All rights reserved.<BR>
+# SPDX-License-Identifier: BSD-2-Clause-Patent
+#
+##
+
+[Defines]
+ INF_VERSION = 0x00010017
+ BASE_NAME = IntelVTdDmarPei
+ MODULE_UNI_FILE = IntelVTdDmarPei.uni
+ FILE_GUID = 2D586AF2-47C4-47BB-A860-89495D5BBFEB
+ MODULE_TYPE = PEIM
+ VERSION_STRING = 1.0
+ ENTRY_POINT = IntelVTdDmarInitialize
+
+[Packages]
+ MdePkg/MdePkg.dec
+ MdeModulePkg/MdeModulePkg.dec
+ IntelSiliconPkg/IntelSiliconPkg.dec
+
+[Sources]
+ IntelVTdDmarPei.c
+ IntelVTdDmarPei.h
+ IntelVTdDmar.c
+ DmarTable.c
+ TranslationTable.c
+
+[LibraryClasses]
+ DebugLib
+ BaseMemoryLib
+ BaseLib
+ PeimEntryPoint
+ PeiServicesLib
+ HobLib
+ IoLib
+ CacheMaintenanceLib
+ PciSegmentLib
+
+[Guids]
+ gVtdPmrInfoDataHobGuid ## CONSUMES
+
+[Ppis]
+ gEdkiiIoMmuPpiGuid ## PRODUCES
+ gEdkiiVTdInfoPpiGuid ## CONSUMES
+ gEfiPeiMemoryDiscoveredPpiGuid ## CONSUMES
+ gEfiEndOfPeiSignalPpiGuid ## CONSUMES
+ gEdkiiVTdNullRootEntryTableGuid ## CONSUMES
+
+[Pcd]
+ gIntelSiliconPkgTokenSpaceGuid.PcdVTdPolicyPropertyMask ## CONSUMES
+ gIntelSiliconPkgTokenSpaceGuid.PcdVTdPeiDmaBufferSize ## CONSUMES
+ gIntelSiliconPkgTokenSpaceGuid.PcdVTdPeiDmaBufferSizeS3 ## CONSUMES
+
+[Depex]
+ gEfiPeiMasterBootModePpiGuid AND
+ gEdkiiVTdInfoPpiGuid
+
+[UserExtensions.TianoCore."ExtraFiles"]
+ IntelVTdDmarPeiExtra.uni
+
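
Platforms opt in to this PEIM through the PCDs listed above: BIT0 of PcdVTdPolicyPropertyMask enables the PEI DMAR protection, and when BIT1 is clear the translation protection is torn down again at the end of PEI on S3 resume. An illustrative DSC fragment is shown below (the PCD section name and values are examples only; check IntelSiliconPkg.dec for the allowed PCD types and defaults):

  [PcdsFixedAtBuild]
    gIntelSiliconPkgTokenSpaceGuid.PcdVTdPolicyPropertyMask|0x01
    gIntelSiliconPkgTokenSpaceGuid.PcdVTdPeiDmaBufferSize|0x00400000
    gIntelSiliconPkgTokenSpaceGuid.PcdVTdPeiDmaBufferSizeS3|0x00200000
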
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.uni b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.uni
new file mode 100644
index 00000000..46025251
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.uni
@@ -0,0 +1,14 @@
+// /** @file
+// IntelVTdDmarPei Module Localized Abstract and Description Content
+//
+// Copyright (c) 2020, Intel Corporation. All rights reserved.<BR>
+//
+// SPDX-License-Identifier: BSD-2-Clause-Patent
+//
+// **/
+
+
+#string STR_MODULE_ABSTRACT #language en-US "Intel VTd DMAR PEI Driver."
+
+#string STR_MODULE_DESCRIPTION #language en-US "This driver initializes the VTd engine based upon EDKII_VTD_INFO_PPI and provides DMA protection to devices in the PEI phase."
+
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPeiExtra.uni b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPeiExtra.uni
new file mode 100644
index 00000000..f60693d6
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPeiExtra.uni
@@ -0,0 +1,14 @@
+// /** @file
+// IntelVTdDmarPei Localized Strings and Content
+//
+// Copyright (c) 2020, Intel Corporation. All rights reserved.<BR>
+//
+// SPDX-License-Identifier: BSD-2-Clause-Patent
+//
+// **/
+
+#string STR_PROPERTIES_MODULE_NAME
+#language en-US
+"Intel VTd DMAR PEI Driver"
+
+
diff --git a/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/TranslationTable.c b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/TranslationTable.c
new file mode 100644
index 00000000..d417f5af
--- /dev/null
+++ b/Silicon/Intel/IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/TranslationTable.c
@@ -0,0 +1,1045 @@
+/** @file
+
+ Copyright (c) 2020, Intel Corporation. All rights reserved.<BR>
+ SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include <Uefi.h>
+#include <PiPei.h>
+#include <Library/BaseLib.h>
+#include <Library/BaseMemoryLib.h>
+#include <Library/MemoryAllocationLib.h>
+#include <Library/IoLib.h>
+#include <Library/DebugLib.h>
+#include <Library/PeiServicesLib.h>
+#include <Library/HobLib.h>
+#include <IndustryStandard/Vtd.h>
+#include <Ppi/IoMmu.h>
+#include <Ppi/VtdInfo.h>
+#include <Ppi/MemoryDiscovered.h>
+#include <Ppi/EndOfPeiPhase.h>
+#include <Guid/VtdPmrInfoHob.h>
+#include <Library/CacheMaintenanceLib.h>
+#include "IntelVTdDmarPei.h"
+
+#define ALIGN_VALUE_UP(Value, Alignment) (((Value) + (Alignment) - 1) & (~((Alignment) - 1)))
+#define ALIGN_VALUE_LOW(Value, Alignment) ((Value) & (~((Alignment) - 1)))
+
+#define VTD_64BITS_ADDRESS(Lo, Hi) (LShiftU64 (Lo, 12) | LShiftU64 (Hi, 32))
+
+/**
+ Allocate zero pages.
+
+ @param[in] Pages the number of pages.
+
+ @return the page address.
+ @retval NULL No resource to allocate pages.
+**/
+VOID *
+EFIAPI
+AllocateZeroPages (
+ IN UINTN Pages
+ )
+{
+ VOID *Addr;
+
+ Addr = AllocatePages (Pages);
+ if (Addr == NULL) {
+ return NULL;
+ }
+ ZeroMem (Addr, EFI_PAGES_TO_SIZE (Pages));
+ return Addr;
+}
+
+/**
+ Set second level paging entry attribute based upon IoMmuAccess.
+
+ @param[in] PtEntry The paging entry.
+ @param[in] IoMmuAccess The IOMMU access.
+**/
+VOID
+SetSecondLevelPagingEntryAttribute (
+ IN VTD_SECOND_LEVEL_PAGING_ENTRY *PtEntry,
+ IN UINT64 IoMmuAccess
+ )
+{
+ PtEntry->Bits.Read = ((IoMmuAccess & EDKII_IOMMU_ACCESS_READ) != 0);
+ PtEntry->Bits.Write = ((IoMmuAccess & EDKII_IOMMU_ACCESS_WRITE) != 0);
+ DEBUG ((DEBUG_VERBOSE, "SetSecondLevelPagingEntryAttribute - 0x%x - 0x%x\n", PtEntry, IoMmuAccess));
+}
+
+/**
+ Create second level paging entry table.
+
+ @param[in] VTdUnitInfo The VTd engine unit information.
+ @param[in] SecondLevelPagingEntry The second level paging entry.
+ @param[in] MemoryBase The base of the memory.
+ @param[in] MemoryLimit The limit of the memory.
+ @param[in] IoMmuAccess The IOMMU access.
+
+ @return The second level paging entry.
+**/
+VTD_SECOND_LEVEL_PAGING_ENTRY *
+CreateSecondLevelPagingEntryTable (
+ IN VTD_UNIT_INFO *VTdUnitInfo,
+ IN VTD_SECOND_LEVEL_PAGING_ENTRY *SecondLevelPagingEntry,
+ IN UINT64 MemoryBase,
+ IN UINT64 MemoryLimit,
+ IN UINT64 IoMmuAccess
+ )
+{
+ UINTN Index5;
+ UINTN Index4;
+ UINTN Index3;
+ UINTN Index2;
+ UINTN Lvl5Start;
+ UINTN Lvl5End;
+ UINTN Lvl4PagesStart;
+ UINTN Lvl4PagesEnd;
+ UINTN Lvl4Start;
+ UINTN Lvl4End;
+ UINTN Lvl3Start;
+ UINTN Lvl3End;
+ VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl5PtEntry;
+ VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl4PtEntry;
+ VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl3PtEntry;
+ VTD_SECOND_LEVEL_PAGING_ENTRY *Lvl2PtEntry;
+ UINT64 BaseAddress;
+ UINT64 EndAddress;
+ BOOLEAN Is5LevelPaging;
+
+ if (MemoryLimit == 0) {
+ return SecondLevelPagingEntry;
+ }
+
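+ //
+ // The leaf entries below are 2MB pages, so align the requested range to 2MB boundaries.
+ //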
+ BaseAddress = ALIGN_VALUE_LOW (MemoryBase, SIZE_2MB);
+ EndAddress = ALIGN_VALUE_UP (MemoryLimit, SIZE_2MB);
+ DEBUG ((DEBUG_INFO, "CreateSecondLevelPagingEntryTable: BaseAddress - 0x%016lx, EndAddress - 0x%016lx\n", BaseAddress, EndAddress));
+
+ if (SecondLevelPagingEntry == NULL) {
+ SecondLevelPagingEntry = AllocateZeroPages (1);
+ if (SecondLevelPagingEntry == NULL) {
+ DEBUG ((DEBUG_ERROR, "Could not Alloc LVL4 or LVL5 PT. \n"));
+ return NULL;
+ }
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) SecondLevelPagingEntry, EFI_PAGES_TO_SIZE (1));
+ }
+
+ DEBUG ((DEBUG_INFO, " SecondLevelPagingEntry:0x%016lx\n", SecondLevelPagingEntry));
+ //
+ // If no access is needed, just create not present entry.
+ //
+ if (IoMmuAccess == 0) {
+ DEBUG ((DEBUG_INFO, " SecondLevelPagingEntry:0x%016lx\n", (UINTN) SecondLevelPagingEntry));
+ return SecondLevelPagingEntry;
+ }
+
+ Is5LevelPaging = VTdUnitInfo->Is5LevelPaging;
+
+ if (Is5LevelPaging) {
+ Lvl5Start = RShiftU64 (BaseAddress, 48) & 0x1FF;
+ Lvl5End = RShiftU64 (EndAddress - 1, 48) & 0x1FF;
+ DEBUG ((DEBUG_INFO, " Lvl5Start - 0x%x, Lvl5End - 0x%x\n", Lvl5Start, Lvl5End));
+
+ Lvl4Start = RShiftU64 (BaseAddress, 39) & 0x1FF;
+ Lvl4End = RShiftU64 (EndAddress - 1, 39) & 0x1FF;
+
+ Lvl4PagesStart = (Lvl5Start<<9) | Lvl4Start;
+ Lvl4PagesEnd = (Lvl5End<<9) | Lvl4End;
+ DEBUG ((DEBUG_INFO, " Lvl4PagesStart - 0x%x, Lvl4PagesEnd - 0x%x\n", Lvl4PagesStart, Lvl4PagesEnd));
+
+ Lvl5PtEntry = (VTD_SECOND_LEVEL_PAGING_ENTRY *) SecondLevelPagingEntry;
+ } else {
+ Lvl5Start = RShiftU64 (BaseAddress, 48) & 0x1FF;
+ Lvl5End = Lvl5Start;
+
+ Lvl4Start = RShiftU64 (BaseAddress, 39) & 0x1FF;
+ Lvl4End = RShiftU64 (EndAddress - 1, 39) & 0x1FF;
+ DEBUG ((DEBUG_INFO, " Lvl4Start - 0x%x, Lvl4End - 0x%x\n", Lvl4Start, Lvl4End));
+
+ Lvl4PtEntry = (VTD_SECOND_LEVEL_PAGING_ENTRY *) SecondLevelPagingEntry;
+ }
+
+ for (Index5 = Lvl5Start; Index5 <= Lvl5End; Index5++) {
+ if (Is5LevelPaging) {
+ if (Lvl5PtEntry[Index5].Uint64 == 0) {
+ Lvl5PtEntry[Index5].Uint64 = (UINT64) (UINTN) AllocateZeroPages (1);
+ if (Lvl5PtEntry[Index5].Uint64 == 0) {
+ DEBUG ((DEBUG_ERROR, "!!!!!! ALLOCATE LVL4 PAGE FAIL (0x%x)!!!!!!\n", Index5));
+ ASSERT (FALSE);
+ return NULL;
+ }
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) Lvl5PtEntry[Index5].Uint64, SIZE_4KB);
+ SetSecondLevelPagingEntryAttribute (&Lvl5PtEntry[Index5], EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+ }
+ Lvl4Start = Lvl4PagesStart & 0x1FF;
+ if (((Index5+1)<<9) > Lvl4PagesEnd) {
+ Lvl4End = SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGING_ENTRY) - 1;
+ Lvl4PagesStart = (Index5+1)<<9;
+ } else {
+ Lvl4End = Lvl4PagesEnd & 0x1FF;
+ }
+ DEBUG ((DEBUG_INFO, " Lvl5(0x%x): Lvl4Start - 0x%x, Lvl4End - 0x%x\n", Index5, Lvl4Start, Lvl4End));
+ Lvl4PtEntry = (VTD_SECOND_LEVEL_PAGING_ENTRY *) (UINTN) VTD_64BITS_ADDRESS(Lvl5PtEntry[Index5].Bits.AddressLo, Lvl5PtEntry[Index5].Bits.AddressHi);
+ }
+
+ for (Index4 = Lvl4Start; Index4 <= Lvl4End; Index4++) {
+ if (Lvl4PtEntry[Index4].Uint64 == 0) {
+ Lvl4PtEntry[Index4].Uint64 = (UINT64) (UINTN) AllocateZeroPages (1);
+ if (Lvl4PtEntry[Index4].Uint64 == 0) {
+ DEBUG ((DEBUG_ERROR, "!!!!!! ALLOCATE LVL4 PAGE FAIL (0x%x)!!!!!!\n", Index4));
+ ASSERT(FALSE);
+ return NULL;
+ }
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) Lvl4PtEntry[Index4].Uint64, SIZE_4KB);
+ SetSecondLevelPagingEntryAttribute (&Lvl4PtEntry[Index4], EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+ }
+
+ Lvl3Start = RShiftU64 (BaseAddress, 30) & 0x1FF;
+ if (ALIGN_VALUE_LOW(BaseAddress + SIZE_1GB, SIZE_1GB) <= EndAddress) {
+ Lvl3End = SIZE_4KB / sizeof (VTD_SECOND_LEVEL_PAGING_ENTRY) - 1;
+ } else {
+ Lvl3End = RShiftU64 (EndAddress - 1, 30) & 0x1FF;
+ }
+ DEBUG ((DEBUG_INFO, " Lvl4(0x%x): Lvl3Start - 0x%x, Lvl3End - 0x%x\n", Index4, Lvl3Start, Lvl3End));
+
+ Lvl3PtEntry = (VTD_SECOND_LEVEL_PAGING_ENTRY *) (UINTN) VTD_64BITS_ADDRESS(Lvl4PtEntry[Index4].Bits.AddressLo, Lvl4PtEntry[Index4].Bits.AddressHi);
+ for (Index3 = Lvl3Start; Index3 <= Lvl3End; Index3++) {
+ if (Lvl3PtEntry[Index3].Uint64 == 0) {
+ Lvl3PtEntry[Index3].Uint64 = (UINT64) (UINTN) AllocateZeroPages (1);
+ if (Lvl3PtEntry[Index3].Uint64 == 0) {
+ DEBUG ((DEBUG_ERROR, "!!!!!! ALLOCATE LVL3 PAGE FAIL (0x%x, 0x%x)!!!!!!\n", Index4, Index3));
+ ASSERT(FALSE);
+ return NULL;
+ }
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) Lvl3PtEntry[Index3].Uint64, SIZE_4KB);
+ SetSecondLevelPagingEntryAttribute (&Lvl3PtEntry[Index3], EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+ }
+
+ Lvl2PtEntry = (VTD_SECOND_LEVEL_PAGING_ENTRY *) (UINTN) VTD_64BITS_ADDRESS(Lvl3PtEntry[Index3].Bits.AddressLo, Lvl3PtEntry[Index3].Bits.AddressHi);
+ for (Index2 = 0; Index2 < SIZE_4KB/sizeof(VTD_SECOND_LEVEL_PAGING_ENTRY); Index2++) {
+ Lvl2PtEntry[Index2].Uint64 = BaseAddress;
+ SetSecondLevelPagingEntryAttribute (&Lvl2PtEntry[Index2], IoMmuAccess);
+ Lvl2PtEntry[Index2].Bits.PageSize = 1;
+ BaseAddress += SIZE_2MB;
+ if (BaseAddress >= MemoryLimit) {
+ break;
+ }
+ }
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) Lvl2PtEntry, SIZE_4KB);
+ if (BaseAddress >= MemoryLimit) {
+ break;
+ }
+ }
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) &Lvl3PtEntry[Lvl3Start], (UINTN) &Lvl3PtEntry[Lvl3End + 1] - (UINTN) &Lvl3PtEntry[Lvl3Start]);
+ if (BaseAddress >= MemoryLimit) {
+ break;
+ }
+ }
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) &Lvl4PtEntry[Lvl4Start], (UINTN) &Lvl4PtEntry[Lvl4End + 1] - (UINTN) &Lvl4PtEntry[Lvl4Start]);
+ }
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) &Lvl5PtEntry[Lvl5Start], (UINTN) &Lvl5PtEntry[Lvl5End + 1] - (UINTN) &Lvl5PtEntry[Lvl5Start]);
+
+ DEBUG ((DEBUG_INFO, " SecondLevelPagingEntry:0x%016lx\n", (UINTN)SecondLevelPagingEntry));
+ return SecondLevelPagingEntry;
+}
+
+/**
+ Create context entry.
+
+ @param[in] VTdUnitInfo The VTd engine unit information.
+
+ @retval EFI_SUCCESS The context entry is created.
+ @retval EFI_OUT_OF_RESOURCES Not enough resources to create the context entry.
+
+**/
+EFI_STATUS
+CreateContextEntry (
+ IN VTD_UNIT_INFO *VTdUnitInfo
+ )
+{
+ UINTN RootPages;
+ UINTN ContextPages;
+ UINTN EntryTablePages;
+ VOID *Buffer;
+ UINTN RootIndex;
+ UINTN ContextIndex;
+ VTD_ROOT_ENTRY *RootEntryBase;
+ VTD_ROOT_ENTRY *RootEntry;
+ VTD_CONTEXT_ENTRY *ContextEntryTable;
+ VTD_CONTEXT_ENTRY *ContextEntry;
+ VTD_SOURCE_ID SourceId;
+ VTD_SECOND_LEVEL_PAGING_ENTRY *SecondLevelPagingEntry;
+ UINT64 Pt;
+
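+ //
+ // One set of root-entry table pages, followed by one context-entry table per root entry (per bus).
+ //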
+ RootPages = EFI_SIZE_TO_PAGES (sizeof (VTD_ROOT_ENTRY) * VTD_ROOT_ENTRY_NUMBER);
+ ContextPages = EFI_SIZE_TO_PAGES (sizeof (VTD_CONTEXT_ENTRY) * VTD_CONTEXT_ENTRY_NUMBER);
+ EntryTablePages = RootPages + ContextPages * (VTD_ROOT_ENTRY_NUMBER);
+ Buffer = AllocateZeroPages (EntryTablePages);
+ if (Buffer == NULL) {
+ DEBUG ((DEBUG_ERROR, "Could not Alloc Root Entry Table.. \n"));
+ return EFI_OUT_OF_RESOURCES;
+ }
+
+ DEBUG ((DEBUG_ERROR, "RootEntryTable address - 0x%x\n", Buffer));
+ VTdUnitInfo->RootEntryTable = (UINT32) (UINTN) Buffer;
+ VTdUnitInfo->RootEntryTablePageSize = (UINT16) EntryTablePages;
+ RootEntryBase = (VTD_ROOT_ENTRY *) Buffer;
+ Buffer = (UINT8 *) Buffer + EFI_PAGES_TO_SIZE (RootPages);
+
+ if (VTdUnitInfo->FixedSecondLevelPagingEntry == 0) {
+ DEBUG ((DEBUG_ERROR, "FixedSecondLevelPagingEntry is empty\n"));
+ ASSERT(FALSE);
+ }
+
+ for (RootIndex = 0; RootIndex < VTD_ROOT_ENTRY_NUMBER; RootIndex++) {
+ SourceId.Index.RootIndex = (UINT8) RootIndex;
+
+ RootEntry = &RootEntryBase[SourceId.Index.RootIndex];
+ RootEntry->Bits.ContextTablePointerLo = (UINT32) RShiftU64 ((UINT64) (UINTN) Buffer, 12);
+ RootEntry->Bits.ContextTablePointerHi = (UINT32) RShiftU64 ((UINT64) (UINTN) Buffer, 32);
+ RootEntry->Bits.Present = 1;
+ Buffer = (UINT8 *)Buffer + EFI_PAGES_TO_SIZE (ContextPages);
+ ContextEntryTable = (VTD_CONTEXT_ENTRY *) (UINTN) VTD_64BITS_ADDRESS(RootEntry->Bits.ContextTablePointerLo, RootEntry->Bits.ContextTablePointerHi) ;
+
+ for (ContextIndex = 0; ContextIndex < VTD_CONTEXT_ENTRY_NUMBER; ContextIndex++) {
+ SourceId.Index.ContextIndex = (UINT8) ContextIndex;
+ ContextEntry = &ContextEntryTable[SourceId.Index.ContextIndex];
+
+ ContextEntry->Bits.TranslationType = 0;
+ ContextEntry->Bits.FaultProcessingDisable = 0;
+ ContextEntry->Bits.Present = 0;
+
+ ContextEntry->Bits.AddressWidth = VTdUnitInfo->Is5LevelPaging ? 0x3 : 0x2;
+
+ if (VTdUnitInfo->FixedSecondLevelPagingEntry != 0) {
+ SecondLevelPagingEntry = (VTD_SECOND_LEVEL_PAGING_ENTRY *) (UINTN) VTdUnitInfo->FixedSecondLevelPagingEntry;
+ Pt = (UINT64)RShiftU64 ((UINT64) (UINTN) SecondLevelPagingEntry, 12);
+ ContextEntry->Bits.SecondLevelPageTranslationPointerLo = (UINT32) Pt;
+ ContextEntry->Bits.SecondLevelPageTranslationPointerHi = (UINT32) RShiftU64(Pt, 20);
+ ContextEntry->Bits.DomainIdentifier = ((1 << (UINT8)((UINTN)VTdUnitInfo->CapReg.Bits.ND * 2 + 4)) - 1);
+ ContextEntry->Bits.Present = 1;
+ }
+ }
+ }
+
+ FlushPageTableMemory (VTdUnitInfo, VTdUnitInfo->RootEntryTable, EFI_PAGES_TO_SIZE(EntryTablePages));
+
+ return EFI_SUCCESS;
+}
+
+/**
+ Create extended context entry.
+
+ @param[in] VTdUnitInfo The VTd engine unit information.
+
+ @retval EFI_SUCCESS The extended context entry is created.
+ @retval EFI_OUT_OF_RESOURCES Not enough resources to create the extended context entry.
+**/
+EFI_STATUS
+CreateExtContextEntry (
+ IN VTD_UNIT_INFO *VTdUnitInfo
+ )
+{
+ UINTN RootPages;
+ UINTN ContextPages;
+ UINTN EntryTablePages;
+ VOID *Buffer;
+ UINTN RootIndex;
+ UINTN ContextIndex;
+ VTD_EXT_ROOT_ENTRY *ExtRootEntryBase;
+ VTD_EXT_ROOT_ENTRY *ExtRootEntry;
+ VTD_EXT_CONTEXT_ENTRY *ExtContextEntryTable;
+ VTD_EXT_CONTEXT_ENTRY *ExtContextEntry;
+ VTD_SOURCE_ID SourceId;
+ VTD_SECOND_LEVEL_PAGING_ENTRY *SecondLevelPagingEntry;
+ UINT64 Pt;
+
+ RootPages = EFI_SIZE_TO_PAGES (sizeof (VTD_EXT_ROOT_ENTRY) * VTD_ROOT_ENTRY_NUMBER);
+ ContextPages = EFI_SIZE_TO_PAGES (sizeof (VTD_EXT_CONTEXT_ENTRY) * VTD_CONTEXT_ENTRY_NUMBER);
+ EntryTablePages = RootPages + ContextPages * (VTD_ROOT_ENTRY_NUMBER);
+ Buffer = AllocateZeroPages (EntryTablePages);
+ if (Buffer == NULL) {
+ DEBUG ((DEBUG_INFO, "Could not Alloc Root Entry Table !\n"));
+ return EFI_OUT_OF_RESOURCES;
+ }
+
+ DEBUG ((DEBUG_ERROR, "ExtRootEntryTable address - 0x%x\n", Buffer));
+ VTdUnitInfo->ExtRootEntryTable = (UINT32) (UINTN) Buffer;
+ VTdUnitInfo->ExtRootEntryTablePageSize = (UINT16) EntryTablePages;
+ ExtRootEntryBase = (VTD_EXT_ROOT_ENTRY *) Buffer;
+ Buffer = (UINT8 *) Buffer + EFI_PAGES_TO_SIZE (RootPages);
+
+ if (VTdUnitInfo->FixedSecondLevelPagingEntry == 0) {
+ DEBUG ((DEBUG_ERROR, "FixedSecondLevelPagingEntry is empty\n"));
+ ASSERT(FALSE);
+ }
+
+ for (RootIndex = 0; RootIndex < VTD_ROOT_ENTRY_NUMBER; RootIndex++) {
+ SourceId.Index.RootIndex = (UINT8)RootIndex;
+
+ ExtRootEntry = &ExtRootEntryBase[SourceId.Index.RootIndex];
+ ExtRootEntry->Bits.LowerContextTablePointerLo = (UINT32) RShiftU64 ((UINT64) (UINTN) Buffer, 12);
+ ExtRootEntry->Bits.LowerContextTablePointerHi = (UINT32) RShiftU64 ((UINT64) (UINTN) Buffer, 32);
+ ExtRootEntry->Bits.LowerPresent = 1;
+ ExtRootEntry->Bits.UpperContextTablePointerLo = (UINT32) RShiftU64 ((UINT64) (UINTN) Buffer, 12) + 1;
+ ExtRootEntry->Bits.UpperContextTablePointerHi = (UINT32) RShiftU64 (RShiftU64 ((UINT64) (UINTN) Buffer, 12) + 1, 20);
+ ExtRootEntry->Bits.UpperPresent = 1;
+ Buffer = (UINT8 *) Buffer + EFI_PAGES_TO_SIZE (ContextPages);
+ ExtContextEntryTable = (VTD_EXT_CONTEXT_ENTRY *) (UINTN) VTD_64BITS_ADDRESS (ExtRootEntry->Bits.LowerContextTablePointerLo, ExtRootEntry->Bits.LowerContextTablePointerHi) ;
+
+ for (ContextIndex = 0; ContextIndex < VTD_CONTEXT_ENTRY_NUMBER; ContextIndex++) {
+ SourceId.Index.ContextIndex = (UINT8) ContextIndex;
+ ExtContextEntry = &ExtContextEntryTable[SourceId.Index.ContextIndex];
+
+ ExtContextEntry->Bits.TranslationType = 0;
+ ExtContextEntry->Bits.FaultProcessingDisable = 0;
+ ExtContextEntry->Bits.Present = 0;
+
+ ExtContextEntry->Bits.AddressWidth = VTdUnitInfo->Is5LevelPaging ? 0x3 : 0x2;
+
+ if (VTdUnitInfo->FixedSecondLevelPagingEntry != 0) {
+ SecondLevelPagingEntry = (VTD_SECOND_LEVEL_PAGING_ENTRY *) (UINTN) VTdUnitInfo->FixedSecondLevelPagingEntry;
+ Pt = (UINT64)RShiftU64 ((UINT64) (UINTN) SecondLevelPagingEntry, 12);
+
+ ExtContextEntry->Bits.SecondLevelPageTranslationPointerLo = (UINT32) Pt;
+ ExtContextEntry->Bits.SecondLevelPageTranslationPointerHi = (UINT32) RShiftU64(Pt, 20);
+ ExtContextEntry->Bits.DomainIdentifier = ((1 << (UINT8) ((UINTN) VTdUnitInfo->CapReg.Bits.ND * 2 + 4)) - 1);
+ ExtContextEntry->Bits.Present = 1;
+ }
+ }
+ }
+
+ FlushPageTableMemory (VTdUnitInfo, VTdUnitInfo->ExtRootEntryTable, EFI_PAGES_TO_SIZE(EntryTablePages));
+
+ return EFI_SUCCESS;
+}
+
+#define VTD_PG_R BIT0
+#define VTD_PG_W BIT1
+#define VTD_PG_X BIT2
+#define VTD_PG_EMT (BIT3 | BIT4 | BIT5)
+#define VTD_PG_TM (BIT62)
+
+#define VTD_PG_PS BIT7
+
+#define PAGE_PROGATE_BITS (VTD_PG_TM | VTD_PG_EMT | VTD_PG_W | VTD_PG_R)
+
+#define PAGING_4K_MASK 0xFFF
+#define PAGING_2M_MASK 0x1FFFFF
+#define PAGING_1G_MASK 0x3FFFFFFF
+
+#define PAGING_VTD_INDEX_MASK 0x1FF
+
+#define PAGING_4K_ADDRESS_MASK_64 0x000FFFFFFFFFF000ull
+#define PAGING_2M_ADDRESS_MASK_64 0x000FFFFFFFE00000ull
+#define PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull
+
+typedef enum {
+ PageNone,
+ Page4K,
+ Page2M,
+ Page1G,
+} PAGE_ATTRIBUTE;
+
+typedef struct {
+ PAGE_ATTRIBUTE Attribute;
+ UINT64 Length;
+ UINT64 AddressMask;
+} PAGE_ATTRIBUTE_TABLE;
+
+PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] = {
+ {Page4K, SIZE_4KB, PAGING_4K_ADDRESS_MASK_64},
+ {Page2M, SIZE_2MB, PAGING_2M_ADDRESS_MASK_64},
+ {Page1G, SIZE_1GB, PAGING_1G_ADDRESS_MASK_64},
+};
+
+/**
+ Return length according to page attributes.
+
+ @param[in] PageAttributes The page attribute of the page entry.
+
+ @return The length of page entry.
+**/
+UINTN
+PageAttributeToLength (
+ IN PAGE_ATTRIBUTE PageAttribute
+ )
+{
+ UINTN Index;
+ for (Index = 0; Index < sizeof (mPageAttributeTable) / sizeof (mPageAttributeTable[0]); Index++) {
+ if (PageAttribute == mPageAttributeTable[Index].Attribute) {
+ return (UINTN) mPageAttributeTable[Index].Length;
+ }
+ }
+ return 0;
+}
+
+/**
+ Return page table entry to match the address.
+
+ @param[in] VTdUnitInfo The VTd engine unit information.
+ @param[in] SecondLevelPagingEntry The second level paging entry in VTd table for the device.
+ @param[in] Address The address to be checked.
+ @param[out] PageAttributes The page attribute of the page entry.
+
+ @return The page entry.
+**/
+VOID *
+GetSecondLevelPageTableEntry (
+ IN VTD_UNIT_INFO *VTdUnitInfo,
+ IN VTD_SECOND_LEVEL_PAGING_ENTRY *SecondLevelPagingEntry,
+ IN PHYSICAL_ADDRESS Address,
+ OUT PAGE_ATTRIBUTE *PageAttribute
+ )
+{
+ UINTN Index1;
+ UINTN Index2;
+ UINTN Index3;
+ UINTN Index4;
+ UINTN Index5;
+ UINT64 *L1PageTable;
+ UINT64 *L2PageTable;
+ UINT64 *L3PageTable;
+ UINT64 *L4PageTable;
+ UINT64 *L5PageTable;
+ BOOLEAN Is5LevelPaging;
+
+ Index5 = ((UINTN) RShiftU64 (Address, 48)) & PAGING_VTD_INDEX_MASK;
+ Index4 = ((UINTN) RShiftU64 (Address, 39)) & PAGING_VTD_INDEX_MASK;
+ Index3 = ((UINTN) Address >> 30) & PAGING_VTD_INDEX_MASK;
+ Index2 = ((UINTN) Address >> 21) & PAGING_VTD_INDEX_MASK;
+ Index1 = ((UINTN) Address >> 12) & PAGING_VTD_INDEX_MASK;
+
+ Is5LevelPaging = VTdUnitInfo->Is5LevelPaging;
+
+ if (Is5LevelPaging) {
+ L5PageTable = (UINT64 *) SecondLevelPagingEntry;
+ if (L5PageTable[Index5] == 0) {
+ L5PageTable[Index5] = (UINT64) (UINTN) AllocateZeroPages (1);
+ if (L5PageTable[Index5] == 0) {
+ DEBUG ((DEBUG_ERROR, "!!!!!! ALLOCATE LVL5 PAGE FAIL (0x%x)!!!!!!\n", Index4));
+ ASSERT(FALSE);
+ *PageAttribute = PageNone;
+ return NULL;
+ }
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) L5PageTable[Index5], SIZE_4KB);
+ SetSecondLevelPagingEntryAttribute ((VTD_SECOND_LEVEL_PAGING_ENTRY *) &L5PageTable[Index5], EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) &L5PageTable[Index5], sizeof(L5PageTable[Index5]));
+ }
+ L4PageTable = (UINT64 *) (UINTN) (L5PageTable[Index5] & PAGING_4K_ADDRESS_MASK_64);
+ } else {
+ L4PageTable = (UINT64 *)SecondLevelPagingEntry;
+ }
+
+ if (L4PageTable[Index4] == 0) {
+ L4PageTable[Index4] = (UINT64) (UINTN) AllocateZeroPages (1);
+ if (L4PageTable[Index4] == 0) {
+ DEBUG ((DEBUG_ERROR, "!!!!!! ALLOCATE LVL4 PAGE FAIL (0x%x)!!!!!!\n", Index4));
+ ASSERT(FALSE);
+ *PageAttribute = PageNone;
+ return NULL;
+ }
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) L4PageTable[Index4], SIZE_4KB);
+ SetSecondLevelPagingEntryAttribute ((VTD_SECOND_LEVEL_PAGING_ENTRY *) &L4PageTable[Index4], EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) &L4PageTable[Index4], sizeof(L4PageTable[Index4]));
+ }
+
+ L3PageTable = (UINT64 *) (UINTN) (L4PageTable[Index4] & PAGING_4K_ADDRESS_MASK_64);
+ if (L3PageTable[Index3] == 0) {
+ L3PageTable[Index3] = (UINT64) (UINTN) AllocateZeroPages (1);
+ if (L3PageTable[Index3] == 0) {
+ DEBUG ((DEBUG_ERROR, "!!!!!! ALLOCATE LVL3 PAGE FAIL (0x%x, 0x%x)!!!!!!\n", Index4, Index3));
+ ASSERT(FALSE);
+ *PageAttribute = PageNone;
+ return NULL;
+ }
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) L3PageTable[Index3], SIZE_4KB);
+ SetSecondLevelPagingEntryAttribute ((VTD_SECOND_LEVEL_PAGING_ENTRY *) &L3PageTable[Index3], EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) &L3PageTable[Index3], sizeof (L3PageTable[Index3]));
+ }
+ if ((L3PageTable[Index3] & VTD_PG_PS) != 0) {
+ // 1G
+ *PageAttribute = Page1G;
+ return &L3PageTable[Index3];
+ }
+
+ L2PageTable = (UINT64 *) (UINTN) (L3PageTable[Index3] & PAGING_4K_ADDRESS_MASK_64);
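+ //
+ // Lazily create a no-access 2MB leaf entry for this address so the caller has an entry to update.
+ //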
+ if (L2PageTable[Index2] == 0) {
+ L2PageTable[Index2] = Address & PAGING_2M_ADDRESS_MASK_64;
+ SetSecondLevelPagingEntryAttribute ((VTD_SECOND_LEVEL_PAGING_ENTRY *) &L2PageTable[Index2], 0);
+ L2PageTable[Index2] |= VTD_PG_PS;
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) &L2PageTable[Index2], sizeof (L2PageTable[Index2]));
+ }
+ if ((L2PageTable[Index2] & VTD_PG_PS) != 0) {
+ // 2M
+ *PageAttribute = Page2M;
+ return &L2PageTable[Index2];
+ }
+
+ // 4k
+ L1PageTable = (UINT64 *) (UINTN) (L2PageTable[Index2] & PAGING_4K_ADDRESS_MASK_64);
+ if ((L1PageTable[Index1] == 0) && (Address != 0)) {
+ *PageAttribute = PageNone;
+ return NULL;
+ }
+ *PageAttribute = Page4K;
+ return &L1PageTable[Index1];
+}
+
+/**
+ Modify memory attributes of page entry.
+
+ @param[in] VTdUnitInfo The VTd engine unit information.
+ @param[in] PageEntry The page entry.
+ @param[in] IoMmuAccess The IOMMU access.
+ @param[out] IsModified TRUE means page table modified. FALSE means page table not modified.
+**/
+VOID
+ConvertSecondLevelPageEntryAttribute (
+ IN VTD_UNIT_INFO *VTdUnitInfo,
+ IN VTD_SECOND_LEVEL_PAGING_ENTRY *PageEntry,
+ IN UINT64 IoMmuAccess,
+ OUT BOOLEAN *IsModified
+ )
+{
+ UINT64 CurrentPageEntry;
+ UINT64 NewPageEntry;
+
+ CurrentPageEntry = PageEntry->Uint64;
+ SetSecondLevelPagingEntryAttribute (PageEntry, IoMmuAccess);
+ FlushPageTableMemory (VTdUnitInfo, (UINTN) PageEntry, sizeof(*PageEntry));
+ NewPageEntry = PageEntry->Uint64;
+ if (CurrentPageEntry != NewPageEntry) {
+ *IsModified = TRUE;
+ DEBUG ((DEBUG_VERBOSE, "ConvertSecondLevelPageEntryAttribute 0x%lx", CurrentPageEntry));
+ DEBUG ((DEBUG_VERBOSE, "->0x%lx\n", NewPageEntry));
+ } else {
+ *IsModified = FALSE;
+ }
+}
+
+/**
+ This function returns if there is need to split page entry.
+
+ @param[in] BaseAddress The base address to be checked.
+ @param[in] Length The length to be checked.
+ @param[in] PageAttribute The page attribute of the page entry.
+
+ @return The page attribute to which the entry should be split, or PageNone if no split is needed.
+**/
+PAGE_ATTRIBUTE
+NeedSplitPage (
+ IN PHYSICAL_ADDRESS BaseAddress,
+ IN UINT64 Length,
+ IN PAGE_ATTRIBUTE PageAttribute
+ )
+{
+ UINT64 PageEntryLength;
+
+ PageEntryLength = PageAttributeToLength (PageAttribute);
+
+ if (((BaseAddress & (PageEntryLength - 1)) == 0) && (Length >= PageEntryLength)) {
+ return PageNone;
+ }
+
+ if (((BaseAddress & PAGING_2M_MASK) != 0) || (Length < SIZE_2MB)) {
+ return Page4K;
+ }
+
+ return Page2M;
+}
+
+/**
+ This function splits one page entry to small page entries.
+
+ @param[in] VTdUnitInfo The VTd engine unit information.
+ @param[in] PageEntry The page entry to be split.
+ @param[in] PageAttribute The page attribute of the page entry.
+ @param[in] SplitAttribute How to split the page entry.
+
+ @retval RETURN_SUCCESS The page entry is split.
+ @retval RETURN_UNSUPPORTED The page entry cannot be split.
+ @retval RETURN_OUT_OF_RESOURCES No resource to split page entry.
+**/
+RETURN_STATUS
+SplitSecondLevelPage (
+ IN VTD_UNIT_INFO *VTdUnitInfo,
+ IN VTD_SECOND_LEVEL_PAGING_ENTRY *PageEntry,
+ IN PAGE_ATTRIBUTE PageAttribute,
+ IN PAGE_ATTRIBUTE SplitAttribute
+ )
+{
+ UINT64 BaseAddress;
+ UINT64 *NewPageEntry;
+ UINTN Index;
+
+ ASSERT (PageAttribute == Page2M || PageAttribute == Page1G);
+
+ if (PageAttribute == Page2M) {
+ //
+ // Split 2M to 4K
+ //
+ ASSERT (SplitAttribute == Page4K);
+ if (SplitAttribute == Page4K) {
+ NewPageEntry = AllocateZeroPages (1);
+ DEBUG ((DEBUG_INFO, "Split - 0x%x\n", NewPageEntry));
+ if (NewPageEntry == NULL) {
+ return RETURN_OUT_OF_RESOURCES;
+ }
+ BaseAddress = PageEntry->Uint64 & PAGING_2M_ADDRESS_MASK_64;
+ for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
+ NewPageEntry[Index] = (BaseAddress + SIZE_4KB * Index) | (PageEntry->Uint64 & PAGE_PROGATE_BITS);
+ }
+ FlushPageTableMemory (VTdUnitInfo, (UINTN)NewPageEntry, SIZE_4KB);
+
+ PageEntry->Uint64 = (UINT64)(UINTN)NewPageEntry;
+ SetSecondLevelPagingEntryAttribute (PageEntry, EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+ FlushPageTableMemory (VTdUnitInfo, (UINTN)PageEntry, sizeof(*PageEntry));
+ return RETURN_SUCCESS;
+ } else {
+ return RETURN_UNSUPPORTED;
+ }
+ } else if (PageAttribute == Page1G) {
+ //
+ // Split 1G to 2M
+ // No need to support 1G->4K directly; use 1G->2M, then 2M->4K, to get a more compact page table.
+ //
+ ASSERT (SplitAttribute == Page2M || SplitAttribute == Page4K);
+ if ((SplitAttribute == Page2M || SplitAttribute == Page4K)) {
+ NewPageEntry = AllocateZeroPages (1);
+ DEBUG ((DEBUG_INFO, "Split - 0x%x\n", NewPageEntry));
+ if (NewPageEntry == NULL) {
+ return RETURN_OUT_OF_RESOURCES;
+ }
+ BaseAddress = PageEntry->Uint64 & PAGING_1G_ADDRESS_MASK_64;
+ for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
+ NewPageEntry[Index] = (BaseAddress + SIZE_2MB * Index) | VTD_PG_PS | (PageEntry->Uint64 & PAGE_PROGATE_BITS);
+ }
+ FlushPageTableMemory (VTdUnitInfo, (UINTN)NewPageEntry, SIZE_4KB);
+
+ PageEntry->Uint64 = (UINT64)(UINTN)NewPageEntry;
+ SetSecondLevelPagingEntryAttribute (PageEntry, EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE);
+ FlushPageTableMemory (VTdUnitInfo, (UINTN)PageEntry, sizeof(*PageEntry));
+ return RETURN_SUCCESS;
+ } else {
+ return RETURN_UNSUPPORTED;
+ }
+ } else {
+ return RETURN_UNSUPPORTED;
+ }
+}
+
+/**
+ Set VTd attribute for a system memory on second level page entry
+
+ @param[in] VTdUnitInfo The VTd engine unit information.
+ @param[in] SecondLevelPagingEntry The second level paging entry in VTd table for the device.
+ @param[in] BaseAddress The base of device memory address to be used as the DMA memory.
+ @param[in] Length The length of device memory address to be used as the DMA memory.
+ @param[in] IoMmuAccess The IOMMU access.
+
+ @retval EFI_SUCCESS The IoMmuAccess is set for the memory range specified by BaseAddress and Length.
+ @retval EFI_INVALID_PARAMETER BaseAddress is not IoMmu Page size aligned.
+ @retval EFI_INVALID_PARAMETER Length is not IoMmu Page size aligned.
+ @retval EFI_INVALID_PARAMETER Length is 0.
+ @retval EFI_INVALID_PARAMETER IoMmuAccess specified an illegal combination of access.
+ @retval EFI_UNSUPPORTED The bit mask of IoMmuAccess is not supported by the IOMMU.
+ @retval EFI_UNSUPPORTED The IOMMU does not support the memory range specified by BaseAddress and Length.
+ @retval EFI_OUT_OF_RESOURCES There are not enough resources available to modify the IOMMU access.
+ @retval EFI_DEVICE_ERROR The IOMMU device reported an error while attempting the operation.
+**/
+EFI_STATUS
+SetSecondLevelPagingAttribute (
+ IN VTD_UNIT_INFO *VTdUnitInfo,
+ IN VTD_SECOND_LEVEL_PAGING_ENTRY *SecondLevelPagingEntry,
+ IN UINT64 BaseAddress,
+ IN UINT64 Length,
+ IN UINT64 IoMmuAccess
+ )
+{
+ VTD_SECOND_LEVEL_PAGING_ENTRY *PageEntry;
+ PAGE_ATTRIBUTE PageAttribute;
+ UINTN PageEntryLength;
+ PAGE_ATTRIBUTE SplitAttribute;
+ EFI_STATUS Status;
+ BOOLEAN IsEntryModified;
+
+ DEBUG ((DEBUG_INFO, "SetSecondLevelPagingAttribute (0x%016lx - 0x%016lx : %x) \n", BaseAddress, Length, IoMmuAccess));
+ DEBUG ((DEBUG_INFO, " SecondLevelPagingEntry Base - 0x%x\n", SecondLevelPagingEntry));
+
+ if (BaseAddress != ALIGN_VALUE(BaseAddress, SIZE_4KB)) {
+ DEBUG ((DEBUG_ERROR, "SetSecondLevelPagingAttribute - Invalid Alignment\n"));
+ return EFI_UNSUPPORTED;
+ }
+ if (Length != ALIGN_VALUE(Length, SIZE_4KB)) {
+ DEBUG ((DEBUG_ERROR, "SetSecondLevelPagingAttribute - Invalid Alignment\n"));
+ return EFI_UNSUPPORTED;
+ }
+
+ while (Length != 0) {
+ PageEntry = GetSecondLevelPageTableEntry (VTdUnitInfo, SecondLevelPagingEntry, BaseAddress, &PageAttribute);
+ if (PageEntry == NULL) {
+ DEBUG ((DEBUG_ERROR, "PageEntry - NULL\n"));
+ return RETURN_UNSUPPORTED;
+ }
+ PageEntryLength = PageAttributeToLength (PageAttribute);
+ SplitAttribute = NeedSplitPage (BaseAddress, Length, PageAttribute);
+ if (SplitAttribute == PageNone) {
+ ConvertSecondLevelPageEntryAttribute (VTdUnitInfo, PageEntry, IoMmuAccess, &IsEntryModified);
+ if (IsEntryModified) {
+ //mVtdUnitInformation[VtdIndex].HasDirtyPages = TRUE;
+ }
+ //
+ // Convert success, move to next
+ //
+ BaseAddress += PageEntryLength;
+ Length -= PageEntryLength;
+ } else {
+ Status = SplitSecondLevelPage (VTdUnitInfo, PageEntry, PageAttribute, SplitAttribute);
+ if (RETURN_ERROR (Status)) {
+ DEBUG ((DEBUG_ERROR, "SplitSecondLevelPage - %r\n", Status));
+ return RETURN_UNSUPPORTED;
+ }
+ //mVtdUnitInformation[VtdIndex].HasDirtyPages = TRUE;
+ //
+ // Just split current page
+ // Convert success in next around
+ //
+ }
+ }
+
+ return EFI_SUCCESS;
+}
+
+/**
+ Create Fixed Second Level Paging Entry.
+
+ @param[in] VTdUnitInfo The VTd engine unit information.
+
+ @retval EFI_SUCCESS Setup translation table successfully.
+ @retval EFI_OUT_OF_RESOURCES Failed to set up the translation table.
+
+**/
+EFI_STATUS
+CreateFixedSecondLevelPagingEntry (
+ IN VTD_UNIT_INFO *VTdUnitInfo
+ )
+{
+ EFI_STATUS Status;
+ UINT64 IoMmuAccess;
+ UINT64 BaseAddress;
+ UINT64 Length;
+ VOID *Hob;
+ DMA_BUFFER_INFO *DmaBufferInfo;
+
+ VTdUnitInfo->FixedSecondLevelPagingEntry = (UINT32) (UINTN) CreateSecondLevelPagingEntryTable (VTdUnitInfo, NULL, 0, SIZE_4GB, 0);
+ if (VTdUnitInfo->FixedSecondLevelPagingEntry == 0) {
+ DEBUG ((DEBUG_ERROR, "FixedSecondLevelPagingEntry is empty\n"));
+ return EFI_OUT_OF_RESOURCES;
+ }
+
+ Hob = GetFirstGuidHob (&mDmaBufferInfoGuid);
+ DmaBufferInfo = GET_GUID_HOB_DATA (Hob);
+ BaseAddress = DmaBufferInfo->DmaBufferBase;
+ Length = DmaBufferInfo->DmaBufferLimit - DmaBufferInfo->DmaBufferBase;
+ IoMmuAccess = EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE;
+
+ DEBUG ((DEBUG_INFO, " BaseAddress = 0x%lx\n", BaseAddress));
+ DEBUG ((DEBUG_INFO, " Length = 0x%lx\n", Length));
+ DEBUG ((DEBUG_INFO, " IoMmuAccess = 0x%lx\n", IoMmuAccess));
+
+ Status = SetSecondLevelPagingAttribute (VTdUnitInfo, (VTD_SECOND_LEVEL_PAGING_ENTRY*) (UINTN) VTdUnitInfo->FixedSecondLevelPagingEntry, BaseAddress, Length, IoMmuAccess);
+
+ return Status;
+}
+/**
+ Setup VTd translation table.
+
+ @param[in] VTdInfo The VTd engine context information.
+
+ @retval EFI_SUCCESS Setup translation table successfully.
+ @retval EFI_OUT_OF_RESOURCES Failed to set up the translation table.
+
+**/
+EFI_STATUS
+SetupTranslationTable (
+ IN VTD_INFO *VTdInfo
+ )
+{
+ EFI_STATUS Status;
+ UINTN Index;
+ VTD_UNIT_INFO *VtdUnitInfo;
+
+ for (Index = 0; Index < VTdInfo->VTdEngineCount; Index++) {
+ VtdUnitInfo = &VTdInfo->VtdUnitInfo[Index];
+
+ Status = CreateFixedSecondLevelPagingEntry (VtdUnitInfo);
+ if (EFI_ERROR (Status)) {
+ DEBUG ((DEBUG_INFO, "CreateFixedSecondLevelPagingEntry failed - %r\n", Status));
+ return Status;
+ }
+
+ if (VtdUnitInfo->ECapReg.Bits.ECS) {
+ DEBUG ((DEBUG_INFO, "CreateExtContextEntry - %d\n", Index));
+ Status = CreateExtContextEntry (VtdUnitInfo);
+ } else {
+ DEBUG ((DEBUG_INFO, "CreateContextEntry - %d\n", Index));
+ Status = CreateContextEntry (VtdUnitInfo);
+ }
+ if (EFI_ERROR (Status)) {
+ return Status;
+ }
+ }
+ return EFI_SUCCESS;
+}
+
+/**
+ Find the VTd index by the Segment and SourceId.
+
+ @param[in] VTdInfo The VTd engine context information.
+ @param[in] Segment The segment of the source.
+ @param[in] SourceId The SourceId of the source.
+ @param[out] ExtContextEntry The ExtContextEntry of the source.
+ @param[out] ContextEntry The ContextEntry of the source.
+
+ @return The index of the VTd engine.
+ @retval (UINTN)-1 The VTd engine is not found.
+**/
+UINTN
+FindVtdIndexBySegmentSourceId (
+ IN VTD_INFO *VTdInfo,
+ IN UINT16 Segment,
+ IN VTD_SOURCE_ID SourceId,
+ OUT VTD_EXT_CONTEXT_ENTRY **ExtContextEntry,
+ OUT VTD_CONTEXT_ENTRY **ContextEntry
+ )
+{
+ UINTN VtdIndex;
+ VTD_ROOT_ENTRY *RootEntryBase;
+ VTD_ROOT_ENTRY *RootEntry;
+ VTD_CONTEXT_ENTRY *ContextEntryTable;
+ VTD_CONTEXT_ENTRY *ThisContextEntry;
+ VTD_EXT_ROOT_ENTRY *ExtRootEntryBase;
+ VTD_EXT_ROOT_ENTRY *ExtRootEntry;
+ VTD_EXT_CONTEXT_ENTRY *ExtContextEntryTable;
+ VTD_EXT_CONTEXT_ENTRY *ThisExtContextEntry;
+
+ for (VtdIndex = 0; VtdIndex < VTdInfo->VTdEngineCount; VtdIndex++) {
+ if (GetPciDataIndex (&VTdInfo->VtdUnitInfo[VtdIndex], Segment, SourceId) != (UINTN)-1) {
+ DEBUG ((DEBUG_INFO, "Find VtdIndex(0x%x) for S%04x B%02x D%02x F%02x\n", VtdIndex, Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+ break;
+ }
+ }
+ if (VtdIndex >= VTdInfo->VTdEngineCount) {
+ for (VtdIndex = 0; VtdIndex < VTdInfo->VTdEngineCount; VtdIndex++) {
+ if (Segment != VTdInfo->VtdUnitInfo[VtdIndex].Segment) {
+ continue;
+ }
+ if (VTdInfo->VtdUnitInfo[VtdIndex].PciDeviceInfo.IncludeAllFlag) {
+ DEBUG ((DEBUG_INFO, "Find IncludeAllFlag VtdIndex(0x%x) for S%04x B%02x D%02x F%02x\n", VtdIndex, Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+ break;
+ }
+ }
+ }
+
+ if (VtdIndex < VTdInfo->VTdEngineCount) {
+ ExtRootEntryBase = (VTD_EXT_ROOT_ENTRY *) (UINTN) VTdInfo->VtdUnitInfo[VtdIndex].ExtRootEntryTable;
+ if (ExtRootEntryBase != 0) {
+ ExtRootEntry = &ExtRootEntryBase[SourceId.Index.RootIndex];
+ ExtContextEntryTable = (VTD_EXT_CONTEXT_ENTRY *) (UINTN) VTD_64BITS_ADDRESS(ExtRootEntry->Bits.LowerContextTablePointerLo, ExtRootEntry->Bits.LowerContextTablePointerHi) ;
+ ThisExtContextEntry = &ExtContextEntryTable[SourceId.Index.ContextIndex];
+ if (ThisExtContextEntry->Bits.AddressWidth == 0) {
+ DEBUG ((DEBUG_INFO, "ExtContextEntry AddressWidth : 0x%x\n", ThisExtContextEntry->Bits.AddressWidth));
+ return (UINTN)-1;
+ }
+ *ExtContextEntry = ThisExtContextEntry;
+ *ContextEntry = NULL;
+ } else {
+ RootEntryBase = (VTD_ROOT_ENTRY*) (UINTN) VTdInfo->VtdUnitInfo[VtdIndex].RootEntryTable;
+ RootEntry = &RootEntryBase[SourceId.Index.RootIndex];
+ ContextEntryTable = (VTD_CONTEXT_ENTRY *) (UINTN) VTD_64BITS_ADDRESS(RootEntry->Bits.ContextTablePointerLo, RootEntry->Bits.ContextTablePointerHi) ;
+ ThisContextEntry = &ContextEntryTable[SourceId.Index.ContextIndex];
+ if (ThisContextEntry->Bits.AddressWidth == 0) {
+ DEBUG ((DEBUG_INFO, "ContextEntry AddressWidth : 0x%x\n", ThisContextEntry->Bits.AddressWidth));
+ return (UINTN)-1;
+ }
+ *ExtContextEntry = NULL;
+ *ContextEntry = ThisContextEntry;
+ }
+
+ return VtdIndex;
+ }
+
+ return (UINTN)-1;
+}
+
+/**
+ Always enable the VTd page attribute for the device.
+
+ @param[in] VTdInfo The VTd engine context information.
+ @param[in] Segment The Segment used to identify a VTd engine.
+ @param[in] SourceId The SourceId used to identify a VTd engine and table entry.
+ @param[in] MemoryBase The base of the memory.
+ @param[in] MemoryLimit The limit of the memory.
+ @param[in] IoMmuAccess The IOMMU access.
+
+ @retval EFI_SUCCESS The VTd entry is updated to always enable all DMA access for the specific device.
+**/
+EFI_STATUS
+EnableRmrrPageAttribute (
+ IN VTD_INFO *VTdInfo,
+ IN UINT16 Segment,
+ IN VTD_SOURCE_ID SourceId,
+ IN UINT64 MemoryBase,
+ IN UINT64 MemoryLimit,
+ IN UINT64 IoMmuAccess
+ )
+{
+  EFI_STATUS                     Status;
+  UINTN                          VtdIndex;
+  VTD_EXT_CONTEXT_ENTRY          *ExtContextEntry;
+  VTD_CONTEXT_ENTRY              *ContextEntry;
+  VTD_SECOND_LEVEL_PAGING_ENTRY  *SecondLevelPagingEntry;
+  UINT64                         Pt;
+
+  DEBUG ((DEBUG_INFO, "EnableRmrrPageAttribute (S%04x B%02x D%02x F%02x)\n", Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+
+  VtdIndex = FindVtdIndexBySegmentSourceId (VTdInfo, Segment, SourceId, &ExtContextEntry, &ContextEntry);
+  if (VtdIndex == (UINTN)-1) {
+    DEBUG ((DEBUG_ERROR, "EnableRmrrPageAttribute - cannot locate PCI device (S%04x B%02x D%02x F%02x)!\n", Segment, SourceId.Bits.Bus, SourceId.Bits.Device, SourceId.Bits.Function));
+    return EFI_DEVICE_ERROR;
+  }
+
+  if (VTdInfo->VtdUnitInfo[VtdIndex].RmrrSecondLevelPagingEntry == 0) {
+    DEBUG ((DEBUG_INFO, "CreateSecondLevelPagingEntry - %d\n", VtdIndex));
+    VTdInfo->VtdUnitInfo[VtdIndex].RmrrSecondLevelPagingEntry = (UINT32)(UINTN)CreateSecondLevelPagingEntryTable (&VTdInfo->VtdUnitInfo[VtdIndex], NULL, 0, SIZE_4GB, 0);
+    if (VTdInfo->VtdUnitInfo[VtdIndex].RmrrSecondLevelPagingEntry == 0) {
+      return EFI_OUT_OF_RESOURCES;
+    }
+
+    Status = SetSecondLevelPagingAttribute (&VTdInfo->VtdUnitInfo[VtdIndex], (VTD_SECOND_LEVEL_PAGING_ENTRY *)(UINTN)VTdInfo->VtdUnitInfo[VtdIndex].RmrrSecondLevelPagingEntry, MemoryBase, MemoryLimit + 1 - MemoryBase, IoMmuAccess);
+    if (EFI_ERROR (Status)) {
+      return Status;
+    }
+  }
+
+  SecondLevelPagingEntry = (VTD_SECOND_LEVEL_PAGING_ENTRY *) (UINTN) VTdInfo->VtdUnitInfo[VtdIndex].RmrrSecondLevelPagingEntry;
+  Pt = (UINT64) RShiftU64 ((UINT64) (UINTN) SecondLevelPagingEntry, 12);
+  if (ExtContextEntry != NULL) {
+    ExtContextEntry->Bits.SecondLevelPageTranslationPointerLo = (UINT32) Pt;
+    ExtContextEntry->Bits.SecondLevelPageTranslationPointerHi = (UINT32) RShiftU64 (Pt, 20);
+    ExtContextEntry->Bits.DomainIdentifier = ((1 << (UINT8) ((UINTN) VTdInfo->VtdUnitInfo[VtdIndex].CapReg.Bits.ND * 2 + 4)) - 1);
+    ExtContextEntry->Bits.Present = 1;
+    FlushPageTableMemory (&VTdInfo->VtdUnitInfo[VtdIndex], (UINTN) ExtContextEntry, sizeof (*ExtContextEntry));
+  } else if (ContextEntry != NULL) {
+    ContextEntry->Bits.SecondLevelPageTranslationPointerLo = (UINT32) Pt;
+    ContextEntry->Bits.SecondLevelPageTranslationPointerHi = (UINT32) RShiftU64 (Pt, 20);
+    ContextEntry->Bits.DomainIdentifier = ((1 << (UINT8) ((UINTN) VTdInfo->VtdUnitInfo[VtdIndex].CapReg.Bits.ND * 2 + 4)) - 1);
+    ContextEntry->Bits.Present = 1;
+    FlushPageTableMemory (&VTdInfo->VtdUnitInfo[VtdIndex], (UINTN) ContextEntry, sizeof (*ContextEntry));
+  }
+
+  return EFI_SUCCESS;
+}
--
2.16.2.windows.1
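For illustration only, below is a minimal sketch of how EnableRmrrPageAttribute() above might be called from code inside this driver. It assumes the types and declarations in IntelVTdDmarPei.h from this patch are visible; the segment, bus/device/function values and the memory range are made-up examples, and EDKII_IOMMU_ACCESS_READ/WRITE are the standard EDKII IOMMU access bits. This is not part of the patch.

#include "IntelVTdDmarPei.h"

/**
  Illustrative only: grant one example device permanent read/write DMA access
  to an example RMRR-style region. All numeric values are placeholders.
**/
EFI_STATUS
ExampleEnableRmrrForDevice (
  IN VTD_INFO  *VTdInfo
  )
{
  VTD_SOURCE_ID  SourceId;

  //
  // Example requester: Segment 0, Bus 0, Device 2, Function 0 (assumption).
  //
  SourceId.Bits.Bus      = 0x00;
  SourceId.Bits.Device   = 0x02;
  SourceId.Bits.Function = 0x00;

  //
  // MemoryLimit is the last byte of the region (inclusive), matching the
  // MemoryLimit + 1 - MemoryBase length calculation in EnableRmrrPageAttribute().
  //
  return EnableRmrrPageAttribute (
           VTdInfo,
           0x0000,       // Segment (example)
           SourceId,
           0x7F000000,   // MemoryBase  (example)
           0x7F0FFFFF,   // MemoryLimit (example)
           EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE
           );
}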
* [PATCH v7 2/2] IntelSiliconPkg/VTd: Add IntelVTdDmarPei to IntelSiliconPkg
2021-01-08 2:49 [PATCH v7 0/2] Add IntelVTdDmarPei Driver Sheng Wei
2021-01-08 2:49 ` [PATCH v7 1/2] IntelSiliconPkg/VTd: " Sheng Wei
@ 2021-01-08 2:49 ` Sheng Wei
1 sibling, 0 replies; 3+ messages in thread
From: Sheng Wei @ 2021-01-08 2:49 UTC (permalink / raw)
To: devel; +Cc: Ray Ni, Rangasai V Chaganty, Jiewen Yao, Jenny Huang, Feng Roger
The IntelVTdDmarPei driver provides pre-boot DMA protection in the PEI phase through DMAR (DMA Remapping). Add it to the IntelSiliconPkg component build.
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3095
Signed-off-by: Sheng Wei <w.sheng@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Rangasai V Chaganty <rangasai.v.chaganty@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jenny Huang <jenny.huang@intel.com>
Cc: Feng Roger <roger.feng@intel.com>
Reviewed-by: Jenny Huang <jenny.huang@intel.com>
---
Silicon/Intel/IntelSiliconPkg/IntelSiliconPkg.dsc | 1 +
1 file changed, 1 insertion(+)
diff --git a/Silicon/Intel/IntelSiliconPkg/IntelSiliconPkg.dsc b/Silicon/Intel/IntelSiliconPkg/IntelSiliconPkg.dsc
index 029b9156..6dff68f6 100644
--- a/Silicon/Intel/IntelSiliconPkg/IntelSiliconPkg.dsc
+++ b/Silicon/Intel/IntelSiliconPkg/IntelSiliconPkg.dsc
@@ -80,6 +80,7 @@
   IntelSiliconPkg/Feature/PcieSecurity/IntelPciDeviceSecurityDxe/IntelPciDeviceSecurityDxe.inf
   IntelSiliconPkg/Feature/PcieSecurity/SamplePlatformDevicePolicyDxe/SamplePlatformDevicePolicyDxe.inf
   IntelSiliconPkg/Feature/VTd/IntelVTdDxe/IntelVTdDxe.inf
+  IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.inf
   IntelSiliconPkg/Feature/VTd/IntelVTdPmrPei/IntelVTdPmrPei.inf
   IntelSiliconPkg/Feature/VTd/PlatformVTdSampleDxe/PlatformVTdSampleDxe.inf
   IntelSiliconPkg/Feature/VTd/PlatformVTdInfoSamplePei/PlatformVTdInfoSamplePei.inf
--
2.16.2.windows.1
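Usage note (not part of this patch): adding the INF to IntelSiliconPkg.dsc above only ensures the module builds with the package. A platform that wants this PEI-phase DMA protection also needs to dispatch the PEIM from one of its PEI firmware volumes in its flash description (FDF) file. A hypothetical fragment, with the firmware volume name being an assumption, could look like this:

# Illustrative FDF fragment; the FV name below is hypothetical.
[FV.FvPostMemory]
  INF  IntelSiliconPkg/Feature/VTd/IntelVTdDmarPei/IntelVTdDmarPei.inf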