public inbox for devel@edk2.groups.io
 help / color / mirror / Atom feed
* [PATCH v2 0/2] WIP: Enable CMO support for RiscV64
@ 2023-03-31 10:01 Dhaval Sharma
  2023-03-31 10:01 ` [PATCH v2 1/2] WIP: MdePkg/RiscVCMOCacheMaintenanceLib:Enable RISCV CMO Dhaval Sharma
  2023-03-31 10:01 ` [PATCH v2 2/2] OvmfPkg/RiscVVirt: Enable CMO support Dhaval Sharma
  0 siblings, 2 replies; 3+ messages in thread
From: Dhaval Sharma @ 2023-03-31 10:01 UTC (permalink / raw)
  To: devel

The current implementation of cache management (instruction/data
flush/invalidate) depends on the fence.i instruction. Not all RV platforms
may use the same method for cache management. Instead, RISC-V defines the
Cache Management Operations (CMO) specification, which consists of cbo.x
instructions for cache management. However, enabling it requires GCC 12+.
We need to decide how the cbo-based implementation coexists with the
fence.i-based implementation, given the GCC version dependency.

This patchset is primarily intended to review this approach and decide the
path forward.
review branch: https://github.com/rivosinc/edk2/tree/dev_rv_cmo_v3


Dhaval Sharma (2):
  WIP: MdePkg/RiscVCMOCacheMaintenanceLib:Enable RISCV CMO
  OvmfPkg/RiscVVirt: Enable CMO support

 MdePkg/MdePkg.dsc                                                          |   1 +
 OvmfPkg/RiscVVirt/RiscVVirtQemu.dsc                                        |   9 +
 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.inf |  30 ++
 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCache.c                 | 377 ++++++++++++++++++++
 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.uni |  11 +
 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCpuCMOCache.S              |  23 ++
 6 files changed, 451 insertions(+)
 create mode 100644 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.inf
 create mode 100644 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCache.c
 create mode 100644 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.uni
 create mode 100644 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCpuCMOCache.S

-- 
2.40.0.rc0.57.g454dfcbddf


^ permalink raw reply	[flat|nested] 3+ messages in thread

* [PATCH v2 1/2] WIP: MdePkg/RiscVCMOCacheMaintenanceLib:Enable RISCV CMO
  2023-03-31 10:01 [PATCH v2 0/2] WIP: Enable CMO support for RiscV64 Dhaval Sharma
@ 2023-03-31 10:01 ` Dhaval Sharma
  2023-03-31 10:01 ` [PATCH v2 2/2] OvmfPkg/RiscVVirt: Enable CMO support Dhaval Sharma
  1 sibling, 0 replies; 3+ messages in thread
From: Dhaval Sharma @ 2023-03-31 10:01 UTC (permalink / raw)
  To: devel
  Cc: Sunil V L, Andrei Warkentin, Michael D Kinney, Liming Gao,
	Zhiguang Liu, Daniel Schaefer

Add code to support Cache Management Operations
(CMO) defined by the RISC-V spec https://github.com/riscv/riscv-CMOs
Notes:
1. CMO only supports block-based operations, meaning complete
cache flush/invalidate/clean operations are not available.
2. The current implementation uses fence.i instructions, but this
may be platform specific. Many platforms may not support cache
operations based on fence.i.
3. For now, CMO is added on top of fence.i, as it is not supposed
to have any adverse effect on CMO operation.
4. This requires GCC 12.2 onwards.

Test:
1. Ensured the correct instructions are reflected in the generated asm.
2. Able to boot the platform with the RiscVVirtQemu config.
3. Not able to verify the actual instructions on HW, as QEMU ignores
actual cache operations.

Cc: Sunil V L <sunilvl@ventanamicro.com>
Cc: Andrei Warkentin <andrei.warkentin@intel.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Cc: Daniel Schaefer <git@danielschaefer.me>

Signed-off-by: Dhaval Sharma <dhaval@rivosinc.com>
---

Notes:
    v2:
    - Added a separate CMO lib for RISC-V instead of mixing it up
      with the existing BaseCacheMaintenanceLib, which has a fence.i-based
      implementation. With this we have the flexibility to choose the
      library via a configurable option in the dsc.
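The dsc-level selection described above could look like the fragment below (illustrative; the !else branch naming BaseCacheMaintenanceLib is an assumption about the fallback, not part of this patch):

```
!if $(RV_CMO_FEATURE_AVAILABLE) == TRUE
  CacheMaintenanceLib|MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.inf
!else
  CacheMaintenanceLib|MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf
!endif
```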

 MdePkg/MdePkg.dsc                                                          |   1 +
 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.inf |  30 ++
 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCache.c                 | 377 ++++++++++++++++++++
 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.uni |  11 +
 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCpuCMOCache.S              |  23 ++
 5 files changed, 442 insertions(+)

diff --git a/MdePkg/MdePkg.dsc b/MdePkg/MdePkg.dsc
index 0ac7618b4623..78870c916433 100644
--- a/MdePkg/MdePkg.dsc
+++ b/MdePkg/MdePkg.dsc
@@ -192,5 +192,6 @@ [Components.ARM, Components.AARCH64]
 
 [Components.RISCV64]
   MdePkg/Library/BaseRiscVSbiLib/BaseRiscVSbiLib.inf
+  MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.inf
 
 [BuildOptions]
diff --git a/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.inf b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.inf
new file mode 100644
index 000000000000..b36a0d97332b
--- /dev/null
+++ b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.inf
@@ -0,0 +1,30 @@
+## @file
+#  RISCV64 CMO Cache Maintenance Library implementation.
+#
+#  Copyright (c) 2023, Rivos Inc. All rights reserved.<BR>
+#  SPDX-License-Identifier: BSD-2-Clause-Patent
+#
+##
+
+[Defines]
+  INF_VERSION                    = 0x00010005
+  BASE_NAME                      = RiscVCMOCacheMaintenanceLib
+  MODULE_UNI_FILE                = RiscVCMOCacheMaintenanceLib.uni
+  FILE_GUID                      = 6F651f1F-CAD5-4059-B1CE-7E60BC624757
+  MODULE_TYPE                    = BASE
+  VERSION_STRING                 = 1.1
+  LIBRARY_CLASS                  = RiscVCMOCacheMaintenanceLib
+
+#
+#  VALID_ARCHITECTURES           = RISCV64
+#
+
+[Sources]
+  RiscVCMOCache.c
+  RiscVCpuCMOCache.S           | GCC
+
+[Packages]
+  MdePkg/MdePkg.dec
+
+[LibraryClasses]
+  DebugLib
diff --git a/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCache.c b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCache.c
new file mode 100644
index 000000000000..37ce294dbabf
--- /dev/null
+++ b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCache.c
@@ -0,0 +1,377 @@
+/** @file
+  RISC-V specific functionality for cache.
+
+  Copyright (c) 2020, Hewlett Packard Enterprise Development LP. All rights reserved.<BR>
+  Copyright (c) 2023, Rivos Inc. All rights reserved.<BR>
+
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+**/
+
+#include <Base.h>
+#include <Library/DebugLib.h>
+
+/**
+  Use a runtime discovery mechanism in the future, when available
+  through https://lists.riscv.org/g/tech-privileged/topic/83853282
+**/
+#define RV64_CACHE_BLOCK_SIZE  64
+
+typedef enum {
+  Clean,
+  Flush,
+  Invld,
+} CACHE_OP;
+
+/* Ideally we should do this through BaseLib.h by adding
+   Asm*CacheLine functions. This can be done after Initial
+   RV refactoring is complete. For now call functions directly
+*/
+VOID
+EFIAPI
+RiscVCpuCacheFlush (
+  UINTN
+  );
+
+VOID
+EFIAPI
+RiscVCpuCacheClean (
+  UINTN
+  );
+
+VOID
+EFIAPI
+RiscVCpuCacheInval (
+  UINTN
+  );
+
+/**
+  Performs the required operation on the cache lines in the cache coherency
+  domain of the calling CPU. If Address is not aligned on a cache line
+  boundary, then the entire cache line containing Address is operated on.
+  If Address + Length is not aligned on a cache line boundary, then the
+  entire cache line containing Address + Length - 1 is operated on.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param  Address The base address of the cache lines to
+          invalidate. If the CPU is in a physical addressing mode,
+          then Address is a physical address. If the CPU is in a virtual
+          addressing mode, then Address is a virtual address.
+
+  @param  Length  The number of bytes to invalidate from the instruction
+          cache.
+
+  @param  Op  Type of CMO operation to be performed
+
+  @return Address.
+
+**/
+VOID *
+EFIAPI
+CacheOpCacheRange (
+  IN VOID      *Address,
+  IN UINTN     Length,
+  IN CACHE_OP  Op
+  )
+{
+  UINTN  CacheLineSize;
+  UINTN  Start;
+  UINTN  End;
+
+  if (Length == 0) {
+    return Address;
+  }
+
+  ASSERT ((Length - 1) <= (MAX_ADDRESS - (UINTN)Address));
+
+  //
+  // Use the fixed RV64 cache block size until runtime discovery is available
+  //
+  CacheLineSize = RV64_CACHE_BLOCK_SIZE;
+
+  Start = (UINTN)Address;
+  //
+  // Calculate the cache line alignment
+  //
+  End    = (Start + Length + (CacheLineSize - 1)) & ~(CacheLineSize - 1);
+  Start &= ~((UINTN)CacheLineSize - 1);
+
+  DEBUG (
+    (DEBUG_INFO,
+     "%a Performing Cache Management Operation %d \n", __func__, Op)
+    );
+
+  do {
+    switch (Op) {
+      case Invld:
+        RiscVCpuCacheInval (Start);
+        break;
+      case Flush:
+        RiscVCpuCacheFlush (Start);
+        break;
+      case Clean:
+        RiscVCpuCacheClean (Start);
+        break;
+      default:
+        DEBUG ((DEBUG_ERROR, "%a: RISC-V unsupported operation\n", __func__));
+        break;
+    }
+
+    Start = Start + CacheLineSize;
+  } while (Start != End);
+
+  return Address;
+}
+
+/**
+  RISC-V invalidate instruction cache.
+**/
+VOID
+EFIAPI
+RiscVInvalidateInstCacheAsm (
+  VOID
+  );
+
+/**
+  RISC-V invalidate data cache.
+**/
+VOID
+EFIAPI
+RiscVInvalidateDataCacheAsm (
+  VOID
+  );
+
+/**
+  Invalidates the entire instruction cache in cache coherency domain of the
+  calling CPU. This may not clear $IC on all RV implementations.
+  RV CMO only offers block operations as per spec. Entire cache invd will be
+  platform dependent implementation.
+
+**/
+VOID
+EFIAPI
+InvalidateInstructionCache (
+  VOID
+  )
+{
+  RiscVInvalidateInstCacheAsm ();
+}
+
+/**
+  Invalidates a range of instruction cache lines in the cache coherency domain
+  of the calling CPU.
+
+  Invalidates the instruction cache lines specified by Address and Length. If
+  Address is not aligned on a cache line boundary, then entire instruction
+  cache line containing Address is invalidated. If Address + Length is not
+  aligned on a cache line boundary, then the entire instruction cache line
+  containing Address + Length -1 is invalidated. This function may choose to
+  invalidate the entire instruction cache if that is more efficient than
+  invalidating the specified range. If Length is 0, then no instruction cache
+  lines are invalidated. Address is returned.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param  Address The base address of the instruction cache lines to
+                  invalidate. If the CPU is in a physical addressing mode, then
+                  Address is a physical address. If the CPU is in a virtual
+                  addressing mode, then Address is a virtual address.
+
+  @param  Length  The number of bytes to invalidate from the instruction cache.
+
+  @return Address.
+
+**/
+VOID *
+EFIAPI
+InvalidateInstructionCacheRange (
+  IN VOID   *Address,
+  IN UINTN  Length
+  )
+{
+  DEBUG (
+    (DEBUG_INFO,
+     "%a: RISC-V has no I-cache specific range operation.\n"
+     "Invalidating the whole instruction cache first.\n", __func__)
+    );
+  InvalidateInstructionCache ();
+  // RV CMO has no $I-specific op; use the generic block invalidate.
+  CacheOpCacheRange (Address, Length, Invld);
+  return Address;
+}
+
+/**
+  Writes back and invalidates the entire data cache in cache coherency domain
+  of the calling CPU.
+
+  Writes back and invalidates the entire data cache in cache coherency domain
+  of the calling CPU. This function guarantees that all dirty cache lines are
+  written back to system memory, and also invalidates all the data cache lines
+  in the cache coherency domain of the calling CPU.
+  RV CMO only offers block operations as per spec. Entire cache invd will be
+  platform dependent implementation.
+
+**/
+VOID
+EFIAPI
+WriteBackInvalidateDataCache (
+  VOID
+  )
+{
+  DEBUG ((DEBUG_ERROR, "%a:RISC-V unsupported function.\n", __func__));
+}
+
+/**
+  Writes back and invalidates a range of data cache lines in the cache
+  coherency domain of the calling CPU.
+
+  Writes back and invalidates the data cache lines specified by Address and
+  Length. If Address is not aligned on a cache line boundary, then entire data
+  cache line containing Address is written back and invalidated. If Address +
+  Length is not aligned on a cache line boundary, then the entire data cache
+  line containing Address + Length -1 is written back and invalidated. This
+  function may choose to write back and invalidate the entire data cache if
+  that is more efficient than writing back and invalidating the specified
+  range. If Length is 0, then no data cache lines are written back and
+  invalidated. Address is returned.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param  Address The base address of the data cache lines to write back and
+                  invalidate. If the CPU is in a physical addressing mode, then
+                  Address is a physical address. If the CPU is in a virtual
+                  addressing mode, then Address is a virtual address.
+  @param  Length  The number of bytes to write back and invalidate from the
+                  data cache.
+
+  @return Address of cache invalidation.
+
+**/
+VOID *
+EFIAPI
+WriteBackInvalidateDataCacheRange (
+  IN      VOID   *Address,
+  IN      UINTN  Length
+  )
+{
+  CacheOpCacheRange (Address, Length, Flush);
+  return Address;
+}
+
+/**
+  Writes back the entire data cache in cache coherency domain of the calling
+  CPU.
+
+  Writes back the entire data cache in cache coherency domain of the calling
+  CPU. This function guarantees that all dirty cache lines are written back to
+  system memory. This function may also invalidate all the data cache lines in
+  the cache coherency domain of the calling CPU.
+  RV CMO only offers block operations as per spec. Entire cache invd will be
+  platform dependent implementation.
+
+**/
+VOID
+EFIAPI
+WriteBackDataCache (
+  VOID
+  )
+{
+  DEBUG ((DEBUG_ERROR, "%a:RISC-V unsupported function.\n", __func__));
+}
+
+/**
+  Writes back a range of data cache lines in the cache coherency domain of the
+  calling CPU.
+
+  Writes back the data cache lines specified by Address and Length. If Address
+  is not aligned on a cache line boundary, then entire data cache line
+  containing Address is written back. If Address + Length is not aligned on a
+  cache line boundary, then the entire data cache line containing Address +
+  Length -1 is written back. This function may choose to write back the entire
+  data cache if that is more efficient than writing back the specified range.
+  If Length is 0, then no data cache lines are written back. This function may
+  also invalidate all the data cache lines in the specified range of the cache
+  coherency domain of the calling CPU. Address is returned.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param  Address The base address of the data cache lines to write back. If
+                  the CPU is in a physical addressing mode, then Address is a
+                  physical address. If the CPU is in a virtual addressing
+                  mode, then Address is a virtual address.
+  @param  Length  The number of bytes to write back from the data cache.
+
+  @return Address of cache written in main memory.
+
+**/
+VOID *
+EFIAPI
+WriteBackDataCacheRange (
+  IN      VOID   *Address,
+  IN      UINTN  Length
+  )
+{
+  CacheOpCacheRange (Address, Length, Clean);
+  return Address;
+}
+
+/**
+  Invalidates the entire data cache in cache coherency domain of the calling
+  CPU.
+
+  Invalidates the entire data cache in cache coherency domain of the calling
+  CPU. This function must be used with care because dirty cache lines are not
+  written back to system memory. It is typically used for cache diagnostics. If
+  the CPU does not support invalidation of the entire data cache, then a write
+  back and invalidate operation should be performed on the entire data cache.
+  RV CMO only offers block operations as per spec. Entire cache invd will be
+  platform dependent implementation.
+
+**/
+VOID
+EFIAPI
+InvalidateDataCache (
+  VOID
+  )
+{
+  RiscVInvalidateDataCacheAsm ();
+}
+
+/**
+  Invalidates a range of data cache lines in the cache coherency domain of the
+  calling CPU.
+
+  Invalidates the data cache lines specified by Address and Length. If Address
+  is not aligned on a cache line boundary, then entire data cache line
+  containing Address is invalidated. If Address + Length is not aligned on a
+  cache line boundary, then the entire data cache line containing Address +
+  Length -1 is invalidated. This function must never invalidate any cache lines
+  outside the specified range. If Length is 0, then no data cache lines are
+  invalidated. Address is returned. This function must be used with care
+  because dirty cache lines are not written back to system memory. It is
+  typically used for cache diagnostics. If the CPU does not support
+  invalidation of a data cache range, then a write back and invalidate
+  operation should be performed on the data cache range.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param  Address The base address of the data cache lines to invalidate. If
+                  the CPU is in a physical addressing mode, then Address is a
+                  physical address. If the CPU is in a virtual addressing mode,
+                  then Address is a virtual address.
+  @param  Length  The number of bytes to invalidate from the data cache.
+
+  @return Address.
+
+**/
+VOID *
+EFIAPI
+InvalidateDataCacheRange (
+  IN      VOID   *Address,
+  IN      UINTN  Length
+  )
+{
+  // RV does not support $D specific operation.
+  CacheOpCacheRange (Address, Length, Invld);
+  return Address;
+}
diff --git a/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.uni b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.uni
new file mode 100644
index 000000000000..1d16d88e6c15
--- /dev/null
+++ b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.uni
@@ -0,0 +1,11 @@
+// /** @file
+// RiscV Cache Maintenance Library implementation.
+//
+// Copyright (c) 2023, Rivos Inc. All rights reserved.<BR>
+// SPDX-License-Identifier: BSD-2-Clause-Patent
+//
+// **/
+
+#string STR_MODULE_ABSTRACT             #language en-US "Instance of the RISC-V CMO Cache Maintenance Library"
+
+#string STR_MODULE_DESCRIPTION          #language en-US "Instance of the RISC-V CMO Cache Maintenance Library."
diff --git a/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCpuCMOCache.S b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCpuCMOCache.S
new file mode 100644
index 000000000000..0cf054da7703
--- /dev/null
+++ b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCpuCMOCache.S
@@ -0,0 +1,23 @@
+// ------------------------------------------------------------------------------
+//
+// Cache maintenance operations (cbo.*) for RISC-V
+//
+// Copyright (c) 2022, Rivos Inc. All rights reserved.<BR>
+//
+// SPDX-License-Identifier: BSD-2-Clause-Patent
+//
+// ------------------------------------------------------------------------------
+ASM_GLOBAL ASM_PFX (RiscVCpuCacheFlush)
+ASM_PFX (RiscVCpuCacheFlush) :
+  cbo.flush (a0)
+  ret
+
+ASM_GLOBAL ASM_PFX (RiscVCpuCacheClean)
+ASM_PFX (RiscVCpuCacheClean) :
+  cbo.clean (a0)
+  ret
+
+ASM_GLOBAL ASM_PFX (RiscVCpuCacheInval)
+ASM_PFX (RiscVCpuCacheInval) :
+  cbo.inval (a0)
+  ret
-- 
2.40.0.rc0.57.g454dfcbddf


^ permalink raw reply related	[flat|nested] 3+ messages in thread

* [PATCH v2 2/2] OvmfPkg/RiscVVirt: Enable CMO support
  2023-03-31 10:01 [PATCH v2 0/2] WIP: Enable CMO support for RiscV64 Dhaval Sharma
  2023-03-31 10:01 ` [PATCH v2 1/2] WIP: MdePkg/RiscVCMOCacheMaintenanceLib:Enable RISCV CMO Dhaval Sharma
@ 2023-03-31 10:01 ` Dhaval Sharma
  1 sibling, 0 replies; 3+ messages in thread
From: Dhaval Sharma @ 2023-03-31 10:01 UTC (permalink / raw)
  To: devel
  Cc: Ard Biesheuvel, Jiewen Yao, Jordan Justen, Gerd Hoffmann,
	Sunil V L, Andrei Warkentin

Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Sunil V L <sunilvl@ventanamicro.com>
Cc: Andrei Warkentin <andrei.warkentin@intel.com>
Signed-off-by: Dhaval Sharma <dhaval@rivosinc.com>

Add support for Cache Management Operations by conditionally
including the CMO library.
---

Notes:
    v2:
    - Updated RiscVCMOCacheMaintenanceLib as a separate CMO library

 OvmfPkg/RiscVVirt/RiscVVirtQemu.dsc | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/OvmfPkg/RiscVVirt/RiscVVirtQemu.dsc b/OvmfPkg/RiscVVirt/RiscVVirtQemu.dsc
index 28d9af4d79b9..16c714625870 100644
--- a/OvmfPkg/RiscVVirt/RiscVVirtQemu.dsc
+++ b/OvmfPkg/RiscVVirt/RiscVVirtQemu.dsc
@@ -46,6 +46,12 @@ [Defines]
   DEFINE NETWORK_ALLOW_HTTP_CONNECTIONS = TRUE
   DEFINE NETWORK_ISCSI_ENABLE           = FALSE
 
+#
+# CMO support for RISC-V. It depends on toolchain support:
+# Binutils 2.39 (GCC 12.2+) is required.
+#
+  DEFINE RV_CMO_FEATURE_AVAILABLE = FALSE
+
 !if $(NETWORK_SNP_ENABLE) == TRUE
   !error "NETWORK_SNP_ENABLE is IA32/X64/EBC only"
 !endif
@@ -112,6 +118,9 @@ [LibraryClasses.common]
   TpmPlatformHierarchyLib|SecurityPkg/Library/PeiDxeTpmPlatformHierarchyLibNull/PeiDxeTpmPlatformHierarchyLib.inf
 !endif
 
+!if $(RV_CMO_FEATURE_AVAILABLE) == TRUE
+   CacheMaintenanceLib|MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.inf
+!endif
 [LibraryClasses.common.DXE_DRIVER]
   ReportStatusCodeLib|MdeModulePkg/Library/DxeReportStatusCodeLib/DxeReportStatusCodeLib.inf
   PciExpressLib|OvmfPkg/Library/BaseCachingPciExpressLib/BaseCachingPciExpressLib.inf
-- 
2.40.0.rc0.57.g454dfcbddf


^ permalink raw reply related	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2023-03-31 10:01 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-03-31 10:01 [PATCH v2 0/2] WIP: Enable CMO support for RiscV64 Dhaval Sharma
2023-03-31 10:01 ` [PATCH v2 1/2] WIP: MdePkg/RiscVCMOCacheMaintenanceLib:Enable RISCV CMO Dhaval Sharma
2023-03-31 10:01 ` [PATCH v2 2/2] OvmfPkg/RiscVVirt: Enable CMO support Dhaval Sharma

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox