From: "Dhaval Sharma" <dhaval@rivosinc.com>
To: devel@edk2.groups.io
Cc: Sunil V L, Andrei Warkentin, Michael D Kinney, Liming Gao,
	Zhiguang Liu, Daniel Schaefer
Subject: [PATCH v2 1/2] WIP: MdePkg/RiscVCMOCacheMaintenanceLib: Enable RISCV CMO
Date: Fri, 31 Mar 2023 15:31:45 +0530
Message-Id: <20230331100146.242814-2-dhaval@rivosinc.com>
In-Reply-To: <20230331100146.242814-1-dhaval@rivosinc.com>
References: <20230331100146.242814-1-dhaval@rivosinc.com>

Add code to support the Cache Management Operations (CMO) defined by the
RISC-V spec: https://github.com/riscv/riscv-CMOs

Notes:
1. CMO only supports block-based operations, so complete cache
   flush/invalidate/clean operations are not available.
2. The current implementation uses fence.i instructions, but this may be
   platform specific; many platforms may not support cache operations
   based on fence.i.
3. For now, CMO is added on top of fence.i, as it is not expected to
   have any adverse effect on CMO operation.
4. This requires GCC 12.2 or later.

Test:
1. Ensured the correct instructions are reflected in the generated
   assembly.
2. Able to boot a platform with the RiscVVirtQemu config.
3. Not able to verify the actual instructions on hardware, as QEMU
   ignores actual cache operations.
Cc: Sunil V L
Cc: Andrei Warkentin
Cc: Michael D Kinney
Cc: Liming Gao
Cc: Zhiguang Liu
Cc: Daniel Schaefer
Signed-off-by: Dhaval Sharma
---

Notes:
    v2:
    - Added a separate CMO library for RISC-V instead of mixing it with
      the existing BaseCacheMaintenanceLib, which has a fence.i based
      implementation. This gives us the flexibility to choose the library
      through a configurable option in the .dsc file.

 MdePkg/MdePkg.dsc                                                          |   1 +
 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.inf |  30 ++
 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCache.c                 | 377 ++++++++++++++++++++
 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.uni |  11 +
 MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCpuCMOCache.S              |  23 ++
 5 files changed, 442 insertions(+)

diff --git a/MdePkg/MdePkg.dsc b/MdePkg/MdePkg.dsc
index 0ac7618b4623..78870c916433 100644
--- a/MdePkg/MdePkg.dsc
+++ b/MdePkg/MdePkg.dsc
@@ -192,5 +192,6 @@ [Components.ARM, Components.AARCH64]
 
 [Components.RISCV64]
   MdePkg/Library/BaseRiscVSbiLib/BaseRiscVSbiLib.inf
+  MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.inf
 
 [BuildOptions]
diff --git a/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.inf b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.inf
new file mode 100644
index 000000000000..b36a0d97332b
--- /dev/null
+++ b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.inf
@@ -0,0 +1,30 @@
+## @file
+# RISCV64 CMO Cache Maintenance Library implementation.
+#
+# Copyright (c) 2023, Rivos Inc. All rights reserved.
+# SPDX-License-Identifier: BSD-2-Clause-Patent
+#
+##
+
+[Defines]
+  INF_VERSION                    = 0x00010005
+  BASE_NAME                      = RiscVCMOCacheMaintenanceLib
+  MODULE_UNI_FILE                = RiscVCMOCacheMaintenanceLib.uni
+  FILE_GUID                      = 6F651f1F-CAD5-4059-B1CE-7E60BC624757
+  MODULE_TYPE                    = BASE
+  VERSION_STRING                 = 1.1
+  LIBRARY_CLASS                  = RiscVCMOCacheMaintenanceLib
+
+#
+#  VALID_ARCHITECTURES           = RISCV64
+#
+
+[Sources]
+  RiscVCMOCache.c
+  RiscVCpuCMOCache.S | GCC
+
+[Packages]
+  MdePkg/MdePkg.dec
+
+[LibraryClasses]
+  DebugLib
diff --git a/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCache.c b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCache.c
new file mode 100644
index 000000000000..37ce294dbabf
--- /dev/null
+++ b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCache.c
@@ -0,0 +1,377 @@
+/** @file
+  RISC-V specific functionality for cache.
+
+  Copyright (c) 2020, Hewlett Packard Enterprise Development LP. All rights reserved.
+  Copyright (c) 2023, Rivos Inc. All rights reserved.
+
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+**/
+
+#include <Base.h>
+#include <Library/DebugLib.h>
+
+/**
+  Use a runtime discovery mechanism in the future, when available,
+  through https://lists.riscv.org/g/tech-privileged/topic/83853282
+**/
+#define RV64_CACHE_BLOCK_SIZE  64
+
+typedef enum {
+  Clean,
+  Flush,
+  Invld,
+} CACHE_OP;
+
+/* Ideally we should do this through BaseLib.h by adding
+   Asm*CacheLine functions. This can be done after the initial
+   RV refactoring is complete. For now, call the functions directly.
+*/
+VOID
+EFIAPI
+RiscVCpuCacheFlush (
+  UINTN
+  );
+
+VOID
+EFIAPI
+RiscVCpuCacheClean (
+  UINTN
+  );
+
+VOID
+EFIAPI
+RiscVCpuCacheInval (
+  UINTN
+  );
+
+/**
+  Performs the required operation on cache lines in the cache coherency domain
+  of the calling CPU. If Address is not aligned on a cache line boundary,
+  then the entire cache line containing Address is operated on. If Address +
+  Length is not aligned on a cache line boundary, then the entire cache line
+  containing Address + Length - 1 is operated on.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param  Address The base address of the cache lines to
+                  invalidate. If the CPU is in a physical addressing mode,
+                  then Address is a physical address.
+                  If the CPU is in a virtual
+                  addressing mode, then Address is a virtual address.
+
+  @param  Length  The number of bytes on which to perform the cache
+                  operation.
+
+  @param  Op      Type of CMO operation to be performed.
+
+  @return Address.
+
+**/
+VOID *
+EFIAPI
+CacheOpCacheRange (
+  IN VOID      *Address,
+  IN UINTN     Length,
+  IN CACHE_OP  Op
+  )
+{
+  UINTN  CacheLineSize;
+  UINTN  Start;
+  UINTN  End;
+
+  if (Length == 0) {
+    return Address;
+  }
+
+  ASSERT ((Length - 1) <= (MAX_ADDRESS - (UINTN)Address));
+
+  //
+  // Use the fixed RV64 cache block size until a runtime discovery
+  // mechanism is available.
+  //
+  CacheLineSize = RV64_CACHE_BLOCK_SIZE;
+
+  Start = (UINTN)Address;
+  //
+  // Calculate the cache line alignment
+  //
+  End    = (Start + Length + (CacheLineSize - 1)) & ~(CacheLineSize - 1);
+  Start &= ~((UINTN)CacheLineSize - 1);
+
+  DEBUG (
+    (DEBUG_INFO,
+     "%a: Performing cache management operation %d\n", __func__, Op)
+    );
+
+  do {
+    switch (Op) {
+      case Invld:
+        RiscVCpuCacheInval (Start);
+        break;
+      case Flush:
+        RiscVCpuCacheFlush (Start);
+        break;
+      case Clean:
+        RiscVCpuCacheClean (Start);
+        break;
+      default:
+        DEBUG ((DEBUG_ERROR, "%a: RISC-V unsupported operation\n", __func__));
+        break;
+    }
+
+    Start = Start + CacheLineSize;
+  } while (Start != End);
+
+  return Address;
+}
+
+/**
+  RISC-V invalidate instruction cache.
+**/
+VOID
+EFIAPI
+RiscVInvalidateInstCacheAsm (
+  VOID
+  );
+
+/**
+  RISC-V invalidate data cache.
+**/
+VOID
+EFIAPI
+RiscVInvalidateDataCacheAsm (
+  VOID
+  );
+
+/**
+  Invalidates the entire instruction cache in the cache coherency domain of
+  the calling CPU. This may not clear the instruction cache on all RV
+  implementations. RV CMO only offers block operations as per the spec.
+  An entire cache invalidate will be a
+  platform dependent implementation.
+
+**/
+VOID
+EFIAPI
+InvalidateInstructionCache (
+  VOID
+  )
+{
+  RiscVInvalidateInstCacheAsm ();
+}
+
+/**
+  Invalidates a range of instruction cache lines in the cache coherency domain
+  of the calling CPU.
+
+  Invalidates the instruction cache lines specified by Address and Length. If
+  Address is not aligned on a cache line boundary, then the entire instruction
+  cache line containing Address is invalidated. If Address + Length is not
+  aligned on a cache line boundary, then the entire instruction cache line
+  containing Address + Length - 1 is invalidated. This function may choose to
+  invalidate the entire instruction cache if that is more efficient than
+  invalidating the specified range. If Length is 0, then no instruction cache
+  lines are invalidated. Address is returned.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param  Address The base address of the instruction cache lines to
+                  invalidate. If the CPU is in a physical addressing mode, then
+                  Address is a physical address.
+                  If the CPU is in a virtual
+                  addressing mode, then Address is a virtual address.
+
+  @param  Length  The number of bytes to invalidate from the instruction
+                  cache.
+
+  @return Address.
+
+**/
+VOID *
+EFIAPI
+InvalidateInstructionCacheRange (
+  IN VOID   *Address,
+  IN UINTN  Length
+  )
+{
+  DEBUG (
+    (DEBUG_ERROR,
+     "%a: RISC-V unsupported function.\n"
+     "Invalidating the whole instruction cache instead.\n", __func__)
+    );
+  InvalidateInstructionCache ();
+  // RV does not support an I-cache specific range operation.
+  CacheOpCacheRange (Address, Length, Invld);
+  return Address;
+}
+
+/**
+  Writes back and invalidates the entire data cache in the cache coherency
+  domain of the calling CPU.
+
+  Writes back and invalidates the entire data cache in the cache coherency
+  domain of the calling CPU. This function guarantees that all dirty cache
+  lines are written back to system memory, and also invalidates all the data
+  cache lines in the cache coherency domain of the calling CPU.
+  RV CMO only offers block operations as per the spec. An entire cache
+  invalidate will be a platform dependent implementation.
+
+**/
+VOID
+EFIAPI
+WriteBackInvalidateDataCache (
+  VOID
+  )
+{
+  DEBUG ((DEBUG_ERROR, "%a: RISC-V unsupported function.\n", __func__));
+}
+
+/**
+  Writes back and invalidates a range of data cache lines in the cache
+  coherency domain of the calling CPU.
+
+  Writes back and invalidates the data cache lines specified by Address and
+  Length. If Address is not aligned on a cache line boundary, then the entire
+  data cache line containing Address is written back and invalidated. If
+  Address + Length is not aligned on a cache line boundary, then the entire
+  data cache line containing Address + Length - 1 is written back and
+  invalidated.
+  This function may choose to write back and invalidate the entire data cache
+  if that is more efficient than writing back and invalidating the specified
+  range. If Length is 0, then no data cache lines are written back and
+  invalidated. Address is returned.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param  Address The base address of the data cache lines to write back and
+                  invalidate. If the CPU is in a physical addressing mode, then
+                  Address is a physical address. If the CPU is in a virtual
+                  addressing mode, then Address is a virtual address.
+  @param  Length  The number of bytes to write back and invalidate from the
+                  data cache.
+
+  @return Address of the written back and invalidated cache range.
+
+**/
+VOID *
+EFIAPI
+WriteBackInvalidateDataCacheRange (
+  IN VOID   *Address,
+  IN UINTN  Length
+  )
+{
+  CacheOpCacheRange (Address, Length, Flush);
+  return Address;
+}
+
+/**
+  Writes back the entire data cache in the cache coherency domain of the
+  calling CPU.
+
+  Writes back the entire data cache in the cache coherency domain of the
+  calling CPU. This function guarantees that all dirty cache lines are written
+  back to system memory. This function may also invalidate all the data cache
+  lines in the cache coherency domain of the calling CPU.
+  RV CMO only offers block operations as per the spec. An entire cache
+  write-back will be a platform dependent implementation.
+
+**/
+VOID
+EFIAPI
+WriteBackDataCache (
+  VOID
+  )
+{
+  DEBUG ((DEBUG_ERROR, "%a: RISC-V unsupported function.\n", __func__));
+}
+
+/**
+  Writes back a range of data cache lines in the cache coherency domain of the
+  calling CPU.
+
+  Writes back the data cache lines specified by Address and Length.
+  If Address is not aligned on a cache line boundary, then the entire data
+  cache line containing Address is written back. If Address + Length is not
+  aligned on a cache line boundary, then the entire data cache line containing
+  Address + Length - 1 is written back. This function may choose to write back
+  the entire data cache if that is more efficient than writing back the
+  specified range. If Length is 0, then no data cache lines are written back.
+  This function may also invalidate all the data cache lines in the specified
+  range of the cache coherency domain of the calling CPU. Address is returned.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param  Address The base address of the data cache lines to write back. If
+                  the CPU is in a physical addressing mode, then Address is a
+                  physical address. If the CPU is in a virtual addressing
+                  mode, then Address is a virtual address.
+  @param  Length  The number of bytes to write back from the data cache.
+
+  @return Address of the cache range written back to main memory.
+
+**/
+VOID *
+EFIAPI
+WriteBackDataCacheRange (
+  IN VOID   *Address,
+  IN UINTN  Length
+  )
+{
+  CacheOpCacheRange (Address, Length, Clean);
+  return Address;
+}
+
+/**
+  Invalidates the entire data cache in the cache coherency domain of the
+  calling CPU.
+
+  Invalidates the entire data cache in the cache coherency domain of the
+  calling CPU. This function must be used with care because dirty cache lines
+  are not written back to system memory. It is typically used for cache
+  diagnostics. If the CPU does not support invalidation of the entire data
+  cache, then a write back and invalidate operation should be performed on the
+  entire data cache.
+  RV CMO only offers block operations as per the spec.
+  An entire cache
+  invalidate will be a platform dependent implementation.
+
+**/
+VOID
+EFIAPI
+InvalidateDataCache (
+  VOID
+  )
+{
+  RiscVInvalidateDataCacheAsm ();
+}
+
+/**
+  Invalidates a range of data cache lines in the cache coherency domain of the
+  calling CPU.
+
+  Invalidates the data cache lines specified by Address and Length. If Address
+  is not aligned on a cache line boundary, then the entire data cache line
+  containing Address is invalidated. If Address + Length is not aligned on a
+  cache line boundary, then the entire data cache line containing Address +
+  Length - 1 is invalidated. This function must never invalidate any cache
+  lines outside the specified range. If Length is 0, then no data cache lines
+  are invalidated. Address is returned. This function must be used with care
+  because dirty cache lines are not written back to system memory. It is
+  typically used for cache diagnostics. If the CPU does not support
+  invalidation of a data cache range, then a write back and invalidate
+  operation should be performed on the data cache range.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param  Address The base address of the data cache lines to invalidate. If
+                  the CPU is in a physical addressing mode, then Address is a
+                  physical address.
+                  If the CPU is in a virtual addressing mode,
+                  then Address is a virtual address.
+  @param  Length  The number of bytes to invalidate from the data cache.
+
+  @return Address.
+
+**/
+VOID *
+EFIAPI
+InvalidateDataCacheRange (
+  IN VOID   *Address,
+  IN UINTN  Length
+  )
+{
+  // RV does not support a D-cache specific range operation.
+  CacheOpCacheRange (Address, Length, Invld);
+  return Address;
+}
diff --git a/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.uni b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.uni
new file mode 100644
index 000000000000..1d16d88e6c15
--- /dev/null
+++ b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCMOCacheMaintenanceLib.uni
@@ -0,0 +1,11 @@
+// /** @file
+// RiscV Cache Maintenance Library implementation.
+//
+// Copyright (c) 2023, Rivos Inc. All rights reserved.
+// SPDX-License-Identifier: BSD-2-Clause-Patent
+//
+// **/
+
+#string STR_MODULE_ABSTRACT             #language en-US "Instance of the RiscV Cache Maintenance Library"
+
+#string STR_MODULE_DESCRIPTION          #language en-US "Instance of the RiscV Cache Maintenance Library."
diff --git a/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCpuCMOCache.S b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCpuCMOCache.S
new file mode 100644
index 000000000000..0cf054da7703
--- /dev/null
+++ b/MdePkg/Library/RiscVCMOCacheMaintenanceLib/RiscVCpuCMOCache.S
@@ -0,0 +1,23 @@
+// ------------------------------------------------------------------------------
+//
+// RISC-V CMO cache block operations
+//
+// Copyright (c) 2022, Rivos Inc. All rights reserved.
+//
+// SPDX-License-Identifier: BSD-2-Clause-Patent
+//
+// ------------------------------------------------------------------------------
+ASM_GLOBAL ASM_PFX (RiscVCpuCacheFlush)
+ASM_PFX (RiscVCpuCacheFlush):
+  cbo.flush (a0)
+  ret
+
+ASM_GLOBAL ASM_PFX (RiscVCpuCacheClean)
+ASM_PFX (RiscVCpuCacheClean):
+  cbo.clean (a0)
+  ret
+
+ASM_GLOBAL ASM_PFX (RiscVCpuCacheInval)
+ASM_PFX (RiscVCpuCacheInval):
+  cbo.inval (a0)
+  ret
-- 
2.40.0.rc0.57.g454dfcbddf