* [PATCH v2 24/34] MdePkg/BaseCacheMaintenanceLib: LoongArch cache maintenance implementation.
From: Chao Li @ 2022-09-14 9:41 UTC
To: devel; +Cc: Michael D Kinney, Liming Gao, Zhiguang Liu
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4053
Implement LoongArch cache maintenance functions in
BaseCacheMaintenanceLib.
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Signed-off-by: Chao Li <lichao@loongson.cn>
---
.../BaseCacheMaintenanceLib.inf | 6 +-
.../BaseCacheMaintenanceLib/LoongArchCache.c | 254 ++++++++++++++++++
2 files changed, 259 insertions(+), 1 deletion(-)
create mode 100644 MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c
diff --git a/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf b/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf
index 33114243d5..6fd9cbe5f6 100644
--- a/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf
+++ b/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf
@@ -7,6 +7,7 @@
# Copyright (c) 2007 - 2018, Intel Corporation. All rights reserved.<BR>
# Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
# Copyright (c) 2020, Hewlett Packard Enterprise Development LP. All rights reserved.<BR>
+# Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
#
# SPDX-License-Identifier: BSD-2-Clause-Patent
#
@@ -24,7 +25,7 @@
#
-# VALID_ARCHITECTURES = IA32 X64 EBC ARM AARCH64
+# VALID_ARCHITECTURES = IA32 X64 EBC ARM AARCH64 RISCV64 LOONGARCH64
#
[Sources.IA32]
@@ -45,6 +46,9 @@
[Sources.RISCV64]
RiscVCache.c
+[Sources.LOONGARCH64]
+ LoongArchCache.c
+
[Packages]
MdePkg/MdePkg.dec
diff --git a/MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c b/MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c
new file mode 100644
index 0000000000..4c8773278c
--- /dev/null
+++ b/MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c
@@ -0,0 +1,254 @@
+/** @file
+ Cache Maintenance Functions for LoongArch.
+ LoongArch cache maintenance functions have not yet been completed and will be added later.
+ For now, these functions are null implementations.
+
+ Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
+
+ SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+//
+// Include common header file for this module.
+//
+#include <Base.h>
+#include <Library/BaseLib.h>
+#include <Library/DebugLib.h>
+
+/**
+ LoongArch data barrier operation.
+**/
+VOID
+EFIAPI
+AsmDataBarrierLoongArch (
+ VOID
+ );
+
+/**
+ LoongArch instruction barrier operation.
+**/
+VOID
+EFIAPI
+AsmInstructionBarrierLoongArch (
+ VOID
+ );
+
+/**
+ Invalidates the entire instruction cache in cache coherency domain of the
+ calling CPU.
+
+**/
+VOID
+EFIAPI
+InvalidateInstructionCache (
+ VOID
+ )
+{
+ AsmInstructionBarrierLoongArch ();
+}
+
+/**
+ Invalidates a range of instruction cache lines in the cache coherency domain
+ of the calling CPU.
+
+ Invalidates the instruction cache lines specified by Address and Length. If
+ Address is not aligned on a cache line boundary, then entire instruction
+ cache line containing Address is invalidated. If Address + Length is not
+ aligned on a cache line boundary, then the entire instruction cache line
+ containing Address + Length -1 is invalidated. This function may choose to
+ invalidate the entire instruction cache if that is more efficient than
+ invalidating the specified range. If Length is 0, then no instruction cache
+ lines are invalidated. Address is returned.
+
+ If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+ @param[in] Address The base address of the instruction cache lines to
+ invalidate. If the CPU is in a physical addressing mode, then
+ Address is a physical address. If the CPU is in a virtual
+ addressing mode, then Address is a virtual address.
+
+ @param[in] Length The number of bytes to invalidate from the instruction cache.
+
+ @return Address.
+
+**/
+VOID *
+EFIAPI
+InvalidateInstructionCacheRange (
+ IN VOID *Address,
+ IN UINTN Length
+ )
+{
+ AsmInstructionBarrierLoongArch ();
+ return Address;
+}
+
+/**
+ Writes Back and Invalidates the entire data cache in cache coherency domain
+ of the calling CPU.
+
+ Writes Back and Invalidates the entire data cache in cache coherency domain
+ of the calling CPU. This function guarantees that all dirty cache lines are
+ written back to system memory, and also invalidates all the data cache lines
+ in the cache coherency domain of the calling CPU.
+
+**/
+VOID
+EFIAPI
+WriteBackInvalidateDataCache (
+ VOID
+ )
+{
+ DEBUG ((DEBUG_ERROR, "%a: Not currently implemented on LoongArch.\n", __FUNCTION__));
+}
+
+/**
+ Writes Back and Invalidates a range of data cache lines in the cache
+ coherency domain of the calling CPU.
+
+ Writes Back and Invalidate the data cache lines specified by Address and
+ Length. If Address is not aligned on a cache line boundary, then entire data
+ cache line containing Address is written back and invalidated. If Address +
+ Length is not aligned on a cache line boundary, then the entire data cache
+ line containing Address + Length -1 is written back and invalidated. This
+ function may choose to write back and invalidate the entire data cache if
+ that is more efficient than writing back and invalidating the specified
+ range. If Length is 0, then no data cache lines are written back and
+ invalidated. Address is returned.
+
+ If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+ @param[in] Address The base address of the data cache lines to write back and
+ invalidate. If the CPU is in a physical addressing mode, then
+ Address is a physical address. If the CPU is in a virtual
+ addressing mode, then Address is a virtual address.
+ @param[in] Length The number of bytes to write back and invalidate from the
+ data cache.
+
+ @return Address of cache invalidation.
+
+**/
+VOID *
+EFIAPI
+WriteBackInvalidateDataCacheRange (
+ IN VOID *Address,
+ IN UINTN Length
+ )
+{
+ DEBUG ((DEBUG_ERROR, "%a: Not currently implemented on LoongArch.\n", __FUNCTION__));
+ return Address;
+}
+
+/**
+ Writes Back the entire data cache in cache coherency domain of the calling
+ CPU.
+
+ Writes Back the entire data cache in cache coherency domain of the calling
+ CPU. This function guarantees that all dirty cache lines are written back to
+ system memory. This function may also invalidate all the data cache lines in
+ the cache coherency domain of the calling CPU.
+
+**/
+VOID
+EFIAPI
+WriteBackDataCache (
+ VOID
+ )
+{
+ WriteBackInvalidateDataCache ();
+}
+
+/**
+ Writes Back a range of data cache lines in the cache coherency domain of the
+ calling CPU.
+
+ Writes Back the data cache lines specified by Address and Length. If Address
+ is not aligned on a cache line boundary, then entire data cache line
+ containing Address is written back. If Address + Length is not aligned on a
+ cache line boundary, then the entire data cache line containing Address +
+ Length -1 is written back. This function may choose to write back the entire
+ data cache if that is more efficient than writing back the specified range.
+ If Length is 0, then no data cache lines are written back. This function may
+ also invalidate all the data cache lines in the specified range of the cache
+ coherency domain of the calling CPU. Address is returned.
+
+ If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+ @param[in] Address The base address of the data cache lines to write back. If
+ the CPU is in a physical addressing mode, then Address is a
+ physical address. If the CPU is in a virtual addressing
+ mode, then Address is a virtual address.
+ @param[in] Length The number of bytes to write back from the data cache.
+
+ @return Address of cache written in main memory.
+
+**/
+VOID *
+EFIAPI
+WriteBackDataCacheRange (
+ IN VOID *Address,
+ IN UINTN Length
+ )
+{
+ DEBUG ((DEBUG_ERROR, "%a: Not currently implemented on LoongArch.\n", __FUNCTION__));
+ return Address;
+}
+
+/**
+ Invalidates the entire data cache in cache coherency domain of the calling
+ CPU.
+
+ Invalidates the entire data cache in cache coherency domain of the calling
+ CPU. This function must be used with care because dirty cache lines are not
+ written back to system memory. It is typically used for cache diagnostics. If
+ the CPU does not support invalidation of the entire data cache, then a write
+ back and invalidate operation should be performed on the entire data cache.
+
+**/
+VOID
+EFIAPI
+InvalidateDataCache (
+ VOID
+ )
+{
+ AsmDataBarrierLoongArch ();
+}
+
+/**
+ Invalidates a range of data cache lines in the cache coherency domain of the
+ calling CPU.
+
+ Invalidates the data cache lines specified by Address and Length. If Address
+ is not aligned on a cache line boundary, then entire data cache line
+ containing Address is invalidated. If Address + Length is not aligned on a
+ cache line boundary, then the entire data cache line containing Address +
+ Length -1 is invalidated. This function must never invalidate any cache lines
+ outside the specified range. If Length is 0, then no data cache lines are
+ invalidated. Address is returned. This function must be used with care
+ because dirty cache lines are not written back to system memory. It is
+ typically used for cache diagnostics. If the CPU does not support
+ invalidation of a data cache range, then a write back and invalidate
+ operation should be performed on the data cache range.
+
+ If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+ @param[in] Address The base address of the data cache lines to invalidate. If
+ the CPU is in a physical addressing mode, then Address is a
+ physical address. If the CPU is in a virtual addressing mode,
+ then Address is a virtual address.
+ @param[in] Length The number of bytes to invalidate from the data cache.
+
+ @return Address.
+
+**/
+VOID *
+EFIAPI
+InvalidateDataCacheRange (
+ IN VOID *Address,
+ IN UINTN Length
+ )
+{
+ AsmDataBarrierLoongArch ();
+ return Address;
+}
--
2.27.0
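
The two Asm*BarrierLoongArch helpers declared at the top of LoongArchCache.c are external routines; the follow-up below notes that the cache operations were moved into a .S file in patch 0023 of this series. As a rough idea of what they amount to -- a minimal sketch only, assuming GCC-style inline assembly and the LoongArch "ibar"/"dbar" barrier instructions, not the actual code from the series:

    VOID
    EFIAPI
    AsmInstructionBarrierLoongArch (
      VOID
      )
    {
      //
      // Instruction fetch barrier; "ibar 0" synchronizes the instruction
      // stream with prior stores. Illustrative sketch only.
      //
      __asm__ __volatile__ ("ibar 0" : : : "memory");
    }

    VOID
    EFIAPI
    AsmDataBarrierLoongArch (
      VOID
      )
    {
      //
      // Full data barrier; "dbar 0" orders all earlier memory accesses
      // before any later ones. Illustrative sketch only.
      //
      __asm__ __volatile__ ("dbar 0" : : : "memory");
    }

With this patch, the instruction-cache entry points therefore reduce to an instruction barrier and the data-cache invalidate entry points to a full data barrier, while the write-back variants remain placeholder stubs that only log a DEBUG_ERROR message.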
* Re: [PATCH v2 24/34] MdePkg/BaseCacheMaintenanceLib: LoongArch cache maintenance implementation.
From: Chao Li @ 2022-09-23 4:45 UTC
To: Michael D Kinney; +Cc: Liming Gao, Zhiguang Liu, devel@edk2.groups.io
Hi Mike,
I have converted the cache operations to a .S implementation; please refer to patch 0023 for the details. Could you review it again?
Thanks,
Chao
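
For reviewers following the series, a hypothetical caller of the API this patch wires up for LOONGARCH64 might look like the sketch below (LoadAndSyncCode is an illustrative name, not part of the series); with the current placeholder implementations, only the barrier-backed paths do real work.

    #include <Base.h>
    #include <Library/BaseMemoryLib.h>
    #include <Library/CacheMaintenanceLib.h>

    VOID
    LoadAndSyncCode (
      OUT VOID        *Destination,
      IN  CONST VOID  *Source,
      IN  UINTN       Length
      )
    {
      //
      // Stage the code, then make it visible to instruction fetch. On
      // LoongArch, with this patch, InvalidateInstructionCacheRange ()
      // reduces to an instruction barrier, while WriteBackDataCacheRange ()
      // is still a stub.
      //
      CopyMem (Destination, Source, Length);
      WriteBackDataCacheRange (Destination, Length);
      InvalidateInstructionCacheRange (Destination, Length);
    }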
--------
On September 14, 2022, at 5:41 PM, Chao Li <lichao@loongson.cn> wrote:
> REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4053
>
> Implement LoongArch cache maintenance functions in
> BaseCacheMaintenanceLib.
>
> [...]
* Re: [PATCH v2 24/34] MdePkg/BaseCacheMaintenanceLib: LoongArch cache maintenance implementation.
From: Michael D Kinney @ 2022-09-23 15:42 UTC
To: Chao Li, devel@edk2.groups.io, Kinney, Michael D
Cc: Gao, Liming, Liu, Zhiguang
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
> -----Original Message-----
> From: Chao Li <lichao@loongson.cn>
> Sent: Wednesday, September 14, 2022 2:41 AM
> To: devel@edk2.groups.io
> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Gao, Liming <gaoliming@byosoft.com.cn>; Liu, Zhiguang <zhiguang.liu@intel.com>
> Subject: [PATCH v2 24/34] MdePkg/BaseCacheMaintenanceLib: LoongArch cache maintenance implementation.
>
> REF: https://bugzilla.tianocore.org/show_bug.cgi?id=4053
>
> Implement LoongArch cache maintenance functions in
> BaseCacheMaintenanceLib.
>
> [...]