public inbox for devel@edk2.groups.io
From: "Dhaval Sharma" <dhaval@rivosinc.com>
To: devel@edk2.groups.io
Cc: Sunil V L <sunilvl@ventanamicro.com>,
	Andrei Warkentin <andrei.warkentin@intel.com>,
	Daniel Schaefer <git@danielschaefer.me>
Subject: [PATCH v1 1/2] MdePkg/BaseCacheMaintenanceLib: Enable RISCV CMO
Date: Fri, 24 Mar 2023 21:13:41 +0530	[thread overview]
Message-ID: <20230324154342.180062-2-dhaval@rivosinc.com> (raw)
In-Reply-To: <20230324154342.180062-1-dhaval@rivosinc.com>

Adding code to support the Cache Management Operations
(CMO) defined by the RISC-V spec https://github.com/riscv/riscv-CMOs
Notes:
1. CMO only supports block-based operations, meaning complete
cache flush/invalidate/clean operations are not available.
2. The current implementation uses fence.i instructions, but this
may be platform specific. Many platforms may not support cache
operations based on fence.i.
3. For now, CMO is added on top of fence.i, as it is not considered
harmful.
4. This requires GCC 12.2 or later.

Test:
1. Ensured the correct instructions are reflected in the generated asm.
2. Able to boot the platform with the RiscVVirtQemu config.
3. Not able to verify the actual instructions on HW, as QEMU ignores
actual cache operations.

Cc: Sunil V L <sunilvl@ventanamicro.com>
Cc: Andrei Warkentin <andrei.warkentin@intel.com>
Cc: Daniel Schaefer <git@danielschaefer.me>
Signed-off-by: Dhaval Sharma <dhaval@rivosinc.com>
---
 MdePkg/Library/BaseLib/BaseLib.inf                  |   1 +
 MdePkg/Library/BaseCacheMaintenanceLib/RiscVCache.c | 126 ++++++++++++++++++--
 MdePkg/Library/BaseLib/RiscV64/RiscVCpuCache.S      |  23 ++++
 3 files changed, 143 insertions(+), 7 deletions(-)

diff --git a/MdePkg/Library/BaseLib/BaseLib.inf b/MdePkg/Library/BaseLib/BaseLib.inf
index 3a48492b1a01..0d6d6b7414c8 100644
--- a/MdePkg/Library/BaseLib/BaseLib.inf
+++ b/MdePkg/Library/BaseLib/BaseLib.inf
@@ -398,6 +398,7 @@ [Sources.RISCV64]
   RiscV64/MemoryFence.S             | GCC
   RiscV64/RiscVSetJumpLongJump.S    | GCC
   RiscV64/RiscVCpuBreakpoint.S      | GCC
+  RiscV64/RiscVCpuCache.S           | GCC
   RiscV64/RiscVCpuPause.S           | GCC
   RiscV64/RiscVInterrupt.S          | GCC
   RiscV64/FlushCache.S              | GCC
diff --git a/MdePkg/Library/BaseCacheMaintenanceLib/RiscVCache.c b/MdePkg/Library/BaseCacheMaintenanceLib/RiscVCache.c
index d08fb9f193ca..8e88b8391a74 100644
--- a/MdePkg/Library/BaseCacheMaintenanceLib/RiscVCache.c
+++ b/MdePkg/Library/BaseCacheMaintenanceLib/RiscVCache.c
@@ -10,9 +10,111 @@
 #include <Library/BaseLib.h>
 #include <Library/DebugLib.h>
 
+/**
+  Use a runtime discovery mechanism in the future, when available
+  through https://lists.riscv.org/g/tech-privileged/topic/83853282
+**/
+#define RV64_CACHE_BLOCK_SIZE   64
+
+typedef enum {
+  cln,
+  flsh,
+  invd,
+} CACHE_OP;
+
+/* Ideally this should be done through BaseLib.h by adding
+   Asm*CacheLine functions. That can happen after the initial
+   RV refactoring is complete. For now, call the functions directly.
+*/
+VOID
+EFIAPI RiscVCpuCacheFlush (
+  UINTN  Address
+  );
+
+VOID
+EFIAPI RiscVCpuCacheClean (
+  UINTN  Address
+  );
+
+VOID
+EFIAPI RiscVCpuCacheInval (
+  UINTN  Address
+  );
+
+/**
+  Performs the required operation on cache lines in the cache coherency domain
+  of the calling CPU. If Address is not aligned on a cache line boundary,
+  then the entire cache line containing Address is operated on. If Address +
+  Length is not aligned on a cache line boundary, then the entire cache line
+  containing Address + Length - 1 is operated on.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param  Address The base address of the cache lines to operate on. If the
+                  CPU is in a physical addressing mode, then Address is a
+                  physical address. If the CPU is in a virtual addressing
+                  mode, then Address is a virtual address.
+
+  @param  Length  The number of bytes to operate on.
+
+  @param  op      The cache operation to perform.
+
+  @return Address.
+**/
+VOID *
+EFIAPI
+CacheOpCacheRange (
+  IN VOID      *Address,
+  IN UINTN     Length,
+  IN CACHE_OP  op
+  )
+{
+  UINTN   CacheLineSize;
+  UINTN   Start;
+  UINTN   End;
+
+  if (Length == 0) {
+    return Address;
+  }
+
+  ASSERT ((Length - 1) <= (MAX_ADDRESS - (UINTN)Address));
+
+  //
+  // No runtime cache block size discovery yet; use the fixed CMO block size.
+  //
+  CacheLineSize = RV64_CACHE_BLOCK_SIZE;
+
+  Start = (UINTN)Address;
+  //
+  // Calculate the cache line alignment
+  //
+  End    = (Start + Length + (CacheLineSize - 1)) & ~(CacheLineSize - 1);
+  Start &= ~((UINTN)CacheLineSize - 1);
+
+  do {
+    switch (op) {
+      case invd:
+        RiscVCpuCacheInval (Start);
+        break;
+      case flsh:
+        RiscVCpuCacheFlush (Start);
+        break;
+      case cln:
+        RiscVCpuCacheClean (Start);
+        break;
+      default:
+        DEBUG ((DEBUG_ERROR, "%a: unsupported cache operation\n", __func__));
+        break;
+    }
+
+    Start = Start + CacheLineSize;
+  } while (Start != End);
+
+  return Address;
+}
+
 /**
   RISC-V invalidate instruction cache.
-
 **/
 VOID
 EFIAPI
@@ -22,7 +124,6 @@ RiscVInvalidateInstCacheAsm (
 
 /**
   RISC-V invalidate data cache.
-
 **/
 VOID
 EFIAPI
@@ -32,7 +133,9 @@ RiscVInvalidateDataCacheAsm (
 
 /**
   Invalidates the entire instruction cache in cache coherency domain of the
-  calling CPU.
+  calling CPU. This may not invalidate the I-cache on all RISC-V
+  implementations: the CMO spec only offers block operations, so a
+  whole-cache invalidate is a platform-dependent implementation.
 
 **/
 VOID
@@ -77,11 +180,13 @@ InvalidateInstructionCacheRange (
   )
 {
   DEBUG (
-    (DEBUG_WARN,
+    (DEBUG_ERROR,
      "%a:RISC-V unsupported function.\n"
      "Invalidating the whole instruction cache instead.\n", __func__)
     );
   InvalidateInstructionCache ();
+  // RISC-V CMO does not provide an I-cache-specific block operation.
+  CacheOpCacheRange (Address, Length, invd);
   return Address;
 }
 
@@ -93,6 +198,8 @@ InvalidateInstructionCacheRange (
   of the calling CPU. This function guarantees that all dirty cache lines are
   written back to system memory, and also invalidates all the data cache lines
   in the cache coherency domain of the calling CPU.
+  The RISC-V CMO spec only offers block operations, so a whole-cache
+  operation is a platform-dependent implementation.
 
 **/
 VOID
@@ -137,7 +244,7 @@ WriteBackInvalidateDataCacheRange (
   IN      UINTN  Length
   )
 {
-  DEBUG ((DEBUG_ERROR, "%a:RISC-V unsupported function.\n", __func__));
+  CacheOpCacheRange (Address, Length, flsh);
   return Address;
 }
 
@@ -149,6 +256,8 @@ WriteBackInvalidateDataCacheRange (
   CPU. This function guarantees that all dirty cache lines are written back to
   system memory. This function may also invalidate all the data cache lines in
   the cache coherency domain of the calling CPU.
+  The RISC-V CMO spec only offers block operations, so a whole-cache
+  operation is a platform-dependent implementation.
 
 **/
 VOID
@@ -192,7 +301,7 @@ WriteBackDataCacheRange (
   IN      UINTN  Length
   )
 {
-  DEBUG ((DEBUG_ERROR, "%a:RISC-V unsupported function.\n", __func__));
+  CacheOpCacheRange (Address, Length, cln);
   return Address;
 }
 
@@ -205,6 +314,8 @@ WriteBackDataCacheRange (
   written back to system memory. It is typically used for cache diagnostics. If
   the CPU does not support invalidation of the entire data cache, then a write
   back and invalidate operation should be performed on the entire data cache.
+  The RISC-V CMO spec only offers block operations, so a whole-cache
+  operation is a platform-dependent implementation.
 
 **/
 VOID
@@ -250,6 +361,7 @@ InvalidateDataCacheRange (
   IN      UINTN  Length
   )
 {
-  DEBUG ((DEBUG_ERROR, "%a:RISC-V unsupported function.\n", __func__));
+  // RISC-V CMO does not provide a D-cache-specific block operation.
+  CacheOpCacheRange (Address, Length, invd);
   return Address;
 }
diff --git a/MdePkg/Library/BaseLib/RiscV64/RiscVCpuCache.S b/MdePkg/Library/BaseLib/RiscV64/RiscVCpuCache.S
new file mode 100644
index 000000000000..0913ed3e9221
--- /dev/null
+++ b/MdePkg/Library/BaseLib/RiscV64/RiscVCpuCache.S
@@ -0,0 +1,23 @@
+//------------------------------------------------------------------------------
+//
+// Cache maintenance operations for RISC-V
+//
+// Copyright (c) 2022, Rivos Inc. All rights reserved.<BR>
+//
+// SPDX-License-Identifier: BSD-2-Clause-Patent
+//
+//------------------------------------------------------------------------------
+ASM_GLOBAL ASM_PFX(RiscVCpuCacheFlush)
+ASM_PFX(RiscVCpuCacheFlush):
+  cbo.flush (a0)
+  ret
+
+ASM_GLOBAL ASM_PFX(RiscVCpuCacheClean)
+ASM_PFX(RiscVCpuCacheClean):
+  cbo.clean (a0)
+  ret
+
+ASM_GLOBAL ASM_PFX(RiscVCpuCacheInval)
+ASM_PFX(RiscVCpuCacheInval):
+  cbo.inval (a0)
+  ret
-- 
2.40.0.rc0.57.g454dfcbddf



Thread overview: 7+ messages
2023-03-24 15:43 [PATCH v1 0/2] WIP: Enable CMO support for RiscV64 Dhaval Sharma
2023-03-24 15:43 ` Dhaval Sharma [this message]
2023-03-27 15:42   ` [PATCH v1 1/2] MdePkg/BaseCacheMaintenanceLib: Enable RISCV CMO Sunil V L
2023-03-27 17:59     ` Dhaval Sharma
2023-03-28  0:52       ` Sunil V L
2023-03-24 15:43 ` [PATCH v1 2/2] OvmfPkg/RiscVVirt: Enable CMO support Dhaval Sharma
2023-03-27 15:44   ` Sunil V L
