public inbox for devel@edk2.groups.io
From: "Michael D Kinney" <michael.d.kinney@intel.com>
To: Chao Li <lichao@loongson.cn>,
	"devel@edk2.groups.io" <devel@edk2.groups.io>,
	"Kinney, Michael D" <michael.d.kinney@intel.com>
Cc: "Gao, Liming" <gaoliming@byosoft.com.cn>,
	"Liu, Zhiguang" <zhiguang.liu@intel.com>
Subject: Re: [PATCH v1 24/34] MdePkg/BaseCacheMaintenanceLib: LoongArch cache maintenance implementation.
Date: Fri, 9 Sep 2022 17:21:28 +0000	[thread overview]
Message-ID: <CO1PR11MB4929470D1E70175CC41DAE61D2439@CO1PR11MB4929.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20220908045147.1191782-1-lichao@loongson.cn>

I recommend avoiding the use of inline assembly.  Can you convert these functions to a .S file?
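
For illustration, here is a minimal sketch of what one of these routines
could look like in a GCC-syntax .S file. This is only an assumption about
the eventual layout: it uses plain GAS directives rather than any edk2
assembly macros such as ASM_PFX, and the final file naming and instruction
sequences are up to you.

      .text
      .globl  InvalidateInstructionCache
      .type   InvalidateInstructionCache, @function

    # VOID EFIAPI InvalidateInstructionCache (VOID);
    # Illustrative sketch only, not the final implementation: issue an
    # instruction barrier to synchronize the fetch pipeline, then return.
    InvalidateInstructionCache:
      ibar    0                    # instruction barrier, hint 0
      jirl    $zero, $ra, 0        # return to caller

The data cache routines could move into the same .S source once real
write-back/invalidate sequences are available, so that all LoongArch
assembly stays in one file listed under [Sources.LOONGARCH64].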

Mike

> -----Original Message-----
> From: Chao Li <lichao@loongson.cn>
> Sent: Wednesday, September 7, 2022 9:52 PM
> To: devel@edk2.groups.io
> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Gao, Liming <gaoliming@byosoft.com.cn>; Liu, Zhiguang
> <zhiguang.liu@intel.com>
> Subject: [PATCH v1 24/34] MdePkg/BaseCacheMaintenanceLib: LoongArch cache maintenance implementation.
> 
> Implement LoongArch cache maintenance functions in
> BaseCacheMaintenanceLib.
> 
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Liming Gao <gaoliming@byosoft.com.cn>
> Cc: Zhiguang Liu <zhiguang.liu@intel.com>
> 
> Signed-off-by: Chao Li <lichao@loongson.cn>
> ---
>  .../BaseCacheMaintenanceLib.inf               |   4 +
>  .../BaseCacheMaintenanceLib/LoongArchCache.c  | 252 ++++++++++++++++++
>  2 files changed, 256 insertions(+)
>  create mode 100644 MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c
> 
> diff --git a/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf
> b/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf
> index 33114243d5..e103705b2c 100644
> --- a/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf
> +++ b/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf
> @@ -7,6 +7,7 @@
>  #  Copyright (c) 2007 - 2018, Intel Corporation. All rights reserved.<BR>
>  #  Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
>  #  Copyright (c) 2020, Hewlett Packard Enterprise Development LP. All rights reserved.<BR>
> +#  Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
>  #
>  #  SPDX-License-Identifier: BSD-2-Clause-Patent
>  #
> @@ -45,6 +46,9 @@
>  [Sources.RISCV64]
>    RiscVCache.c
> 
> +[Sources.LOONGARCH64]
> +  LoongArchCache.c
> +
>  [Packages]
>    MdePkg/MdePkg.dec
> 
> diff --git a/MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c b/MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c
> new file mode 100644
> index 0000000000..067b5def55
> --- /dev/null
> +++ b/MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c
> @@ -0,0 +1,252 @@
> +/** @file
> +  Cache Maintenance Functions for LoongArch.
> +  The LoongArch cache maintenance functions have not yet been completed and
> +  will be added later. For now they are null implementations.
> +
> +  Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
> +
> +  SPDX-License-Identifier: BSD-2-Clause-Patent
> +
> +**/
> +
> +//
> +// Include common header file for this module.
> +//
> +#include <Base.h>
> +#include <Library/BaseLib.h>
> +#include <Library/DebugLib.h>
> +
> +/**
> +  Invalidates the entire instruction cache in cache coherency domain of the
> +  calling CPU.
> +
> +**/
> +VOID
> +EFIAPI
> +InvalidateInstructionCache (
> +  VOID
> +  )
> +{
> +  __asm__ __volatile__ (
> +     "ibar 0\n"
> +     :
> +     :
> +  );
> +}
> +
> +/**
> +  Invalidates a range of instruction cache lines in the cache coherency domain
> +  of the calling CPU.
> +
> +  Invalidates the instruction cache lines specified by Address and Length. If
> +  Address is not aligned on a cache line boundary, then the entire instruction
> +  cache line containing Address is invalidated. If Address + Length is not
> +  aligned on a cache line boundary, then the entire instruction cache line
> +  containing Address + Length -1 is invalidated. This function may choose to
> +  invalidate the entire instruction cache if that is more efficient than
> +  invalidating the specified range. If Length is 0, then no instruction cache
> +  lines are invalidated. Address is returned.
> +
> +  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
> +
> +  @param[in]  Address The base address of the instruction cache lines to
> +                  invalidate. If the CPU is in a physical addressing mode, then
> +                  Address is a physical address. If the CPU is in a virtual
> +                  addressing mode, then Address is a virtual address.
> +
> +  @param[in]  Length  The number of bytes to invalidate from the instruction cache.
> +
> +  @return Address.
> +
> +**/
> +VOID *
> +EFIAPI
> +InvalidateInstructionCacheRange (
> +  IN       VOID   *Address,
> +  IN       UINTN  Length
> +  )
> +{
> +  __asm__ __volatile__ (
> +     "ibar 0\n"
> +     :
> +     :
> +  );
> +  return Address;
> +}
> +
> +/**
> +  Writes Back and Invalidates the entire data cache in cache coherency domain
> +  of the calling CPU.
> +
> +  Writes Back and Invalidates the entire data cache in cache coherency domain
> +  of the calling CPU. This function guarantees that all dirty cache lines are
> +  written back to system memory, and also invalidates all the data cache lines
> +  in the cache coherency domain of the calling CPU.
> +
> +**/
> +VOID
> +EFIAPI
> +WriteBackInvalidateDataCache (
> +  VOID
> +  )
> +{
> +  DEBUG ((DEBUG_ERROR, "%a: Not currently implemented on LoongArch.\n", __FUNCTION__));
> +}
> +
> +/**
> +  Writes Back and Invalidates a range of data cache lines in the cache
> +  coherency domain of the calling CPU.
> +
> +  Writes Back and Invalidates the data cache lines specified by Address and
> +  Length. If Address is not aligned on a cache line boundary, then the entire
> +  data cache line containing Address is written back and invalidated. If Address +
> +  Length is not aligned on a cache line boundary, then the entire data cache
> +  line containing Address + Length -1 is written back and invalidated. This
> +  function may choose to write back and invalidate the entire data cache if
> +  that is more efficient than writing back and invalidating the specified
> +  range. If Length is 0, then no data cache lines are written back and
> +  invalidated. Address is returned.
> +
> +  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
> +
> +  @param[in]  Address The base address of the data cache lines to write back and
> +                  invalidate. If the CPU is in a physical addressing mode, then
> +                  Address is a physical address. If the CPU is in a virtual
> +                  addressing mode, then Address is a virtual address.
> +  @param[in]  Length  The number of bytes to write back and invalidate from the
> +                  data cache.
> +
> +  @return Address of cache invalidation.
> +
> +**/
> +VOID *
> +EFIAPI
> +WriteBackInvalidateDataCacheRange (
> +  IN      VOID   *Address,
> +  IN      UINTN  Length
> +  )
> +{
> +  DEBUG ((DEBUG_ERROR, "%a: Not currently implemented on LoongArch.\n", __FUNCTION__));
> +  return Address;
> +}
> +
> +/**
> +  Writes Back the entire data cache in cache coherency domain of the calling
> +  CPU.
> +
> +  Writes Back the entire data cache in cache coherency domain of the calling
> +  CPU. This function guarantees that all dirty cache lines are written back to
> +  system memory. This function may also invalidate all the data cache lines in
> +  the cache coherency domain of the calling CPU.
> +
> +**/
> +VOID
> +EFIAPI
> +WriteBackDataCache (
> +  VOID
> +  )
> +{
> +  WriteBackInvalidateDataCache ();
> +}
> +
> +/**
> +  Writes Back a range of data cache lines in the cache coherency domain of the
> +  calling CPU.
> +
> +  Writes Back the data cache lines specified by Address and Length. If Address
> +  is not aligned on a cache line boundary, then the entire data cache line
> +  containing Address is written back. If Address + Length is not aligned on a
> +  cache line boundary, then the entire data cache line containing Address +
> +  Length -1 is written back. This function may choose to write back the entire
> +  data cache if that is more efficient than writing back the specified range.
> +  If Length is 0, then no data cache lines are written back. This function may
> +  also invalidate all the data cache lines in the specified range of the cache
> +  coherency domain of the calling CPU. Address is returned.
> +
> +  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
> +
> +  @param[in]  Address The base address of the data cache lines to write back. If
> +                  the CPU is in a physical addressing mode, then Address is a
> +                  physical address. If the CPU is in a virtual addressing
> +                  mode, then Address is a virtual address.
> +  @param[in]  Length  The number of bytes to write back from the data cache.
> +
> +  @return Address of cache written in main memory.
> +
> +**/
> +VOID *
> +EFIAPI
> +WriteBackDataCacheRange (
> +  IN      VOID   *Address,
> +  IN      UINTN  Length
> +  )
> +{
> +  DEBUG ((DEBUG_ERROR, "%a: Not currently implemented on LoongArch.\n", __FUNCTION__));
> +  return Address;
> +}
> +
> +/**
> +  Invalidates the entire data cache in cache coherency domain of the calling
> +  CPU.
> +
> +  Invalidates the entire data cache in cache coherency domain of the calling
> +  CPU. This function must be used with care because dirty cache lines are not
> +  written back to system memory. It is typically used for cache diagnostics. If
> +  the CPU does not support invalidation of the entire data cache, then a write
> +  back and invalidate operation should be performed on the entire data cache.
> +
> +**/
> +VOID
> +EFIAPI
> +InvalidateDataCache (
> +  VOID
> +  )
> +{
> +  __asm__ __volatile__ (
> +     "dbar 0\n"
> +     :
> +     :
> +  );
> +}
> +
> +/**
> +  Invalidates a range of data cache lines in the cache coherency domain of the
> +  calling CPU.
> +
> +  Invalidates the data cache lines specified by Address and Length. If Address
> +  is not aligned on a cache line boundary, then the entire data cache line
> +  containing Address is invalidated. If Address + Length is not aligned on a
> +  cache line boundary, then the entire data cache line containing Address +
> +  Length -1 is invalidated. This function must never invalidate any cache lines
> +  outside the specified range. If Length is 0, then no data cache lines are
> +  invalidated. Address is returned. This function must be used with care
> +  because dirty cache lines are not written back to system memory. It is
> +  typically used for cache diagnostics. If the CPU does not support
> +  invalidation of a data cache range, then a write back and invalidate
> +  operation should be performed on the data cache range.
> +
> +  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
> +
> +  @param[in]  Address The base address of the data cache lines to invalidate. If
> +                  the CPU is in a physical addressing mode, then Address is a
> +                  physical address. If the CPU is in a virtual addressing mode,
> +                  then Address is a virtual address.
> +  @param[in]  Length  The number of bytes to invalidate from the data cache.
> +
> +  @return Address.
> +
> +**/
> +VOID *
> +EFIAPI
> +InvalidateDataCacheRange (
> +  IN      VOID   *Address,
> +  IN      UINTN  Length
> +  )
> +{
> +  __asm__ __volatile__ (
> +     "dbar 0\n"
> +     :
> +     :
> +  );
> +  return Address;
> +}
> --
> 2.27.0


Thread overview: 3+ messages
2022-09-08  4:51 [PATCH v1 24/34] MdePkg/BaseCacheMaintenanceLib: LoongArch cache maintenance implementation Chao Li
2022-09-09 17:21 ` Michael D Kinney [this message]
2022-09-11  4:18   ` [edk2-devel] " Chao Li
