From: "Chao Li" <lichao@loongson.cn>
To: devel@edk2.groups.io
Cc: Michael D Kinney, Liming Gao, Zhiguang Liu
Subject: [staging/LoongArch RESEND PATCH v1 22/33] MdePkg/BaseCacheMaintenanceLib: LoongArch cache maintenance implementation.
Date: Wed, 9 Feb 2022 14:55:48 +0800
Message-Id: <20220209065548.2989066-1-lichao@loongson.cn>
X-Mailer: git-send-email 2.27.0

Implement LoongArch cache maintenance functions in
BaseCacheMaintenanceLib.

Cc: Michael D Kinney
Cc: Liming Gao
Cc: Zhiguang Liu
Signed-off-by: Chao Li
---
 .../BaseCacheMaintenanceLib.inf               |   4 +
 .../BaseCacheMaintenanceLib/LoongArchCache.c  | 253 ++++++++++++++++++
 2 files changed, 257 insertions(+)
 create mode 100644 MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c

diff --git a/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf b/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf
index 33114243d5..e103705b2c 100644
--- a/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf
+++ b/MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf
@@ -7,6 +7,7 @@
 # Copyright (c) 2007 - 2018, Intel Corporation. All rights reserved.<BR>
 # Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
 # Copyright (c) 2020, Hewlett Packard Enterprise Development LP. All rights reserved.<BR>
+# Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
 #
 # SPDX-License-Identifier: BSD-2-Clause-Patent
 #
@@ -45,6 +46,9 @@
 [Sources.RISCV64]
   RiscVCache.c
 
+[Sources.LOONGARCH64]
+  LoongArchCache.c
+
 [Packages]
   MdePkg/MdePkg.dec
 
diff --git a/MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c b/MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c
new file mode 100644
index 0000000000..4dcba9ecff
--- /dev/null
+++ b/MdePkg/Library/BaseCacheMaintenanceLib/LoongArchCache.c
@@ -0,0 +1,253 @@
+/** @file
+  Cache Maintenance Functions for LoongArch.
+  LoongArch cache maintenance functions have not yet been completed and will
+  be added later. For now the functions below are null implementations.
+
+  Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
+
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+//
+// Include common header file for this module.
+//
+#include <Base.h>
+#include <Library/BaseLib.h>
+#include <Library/DebugLib.h>
+
+/**
+  Invalidates the entire instruction cache in cache coherency domain of the
+  calling CPU.
+
+**/
+VOID
+EFIAPI
+InvalidateInstructionCache (
+  VOID
+  )
+{
+  __asm__ __volatile__(
+    "ibar 0\n"
+    :
+    :
+    );
+}
+
+/**
+  Invalidates a range of instruction cache lines in the cache coherency domain
+  of the calling CPU.
+
+  Invalidates the instruction cache lines specified by Address and Length. If
+  Address is not aligned on a cache line boundary, then the entire instruction
+  cache line containing Address is invalidated. If Address + Length is not
+  aligned on a cache line boundary, then the entire instruction cache line
+  containing Address + Length - 1 is invalidated. This function may choose to
+  invalidate the entire instruction cache if that is more efficient than
+  invalidating the specified range. If Length is 0, then no instruction cache
+  lines are invalidated. Address is returned.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param Address The base address of the instruction cache lines to
+                 invalidate. If the CPU is in a physical addressing mode, then
+                 Address is a physical address. If the CPU is in a virtual
+                 addressing mode, then Address is a virtual address.
+
+  @param Length  The number of bytes to invalidate from the instruction cache.
+
+  @return Address.
+
+**/
+VOID *
+EFIAPI
+InvalidateInstructionCacheRange (
+  IN VOID   *Address,
+  IN UINTN  Length
+  )
+{
+  __asm__ __volatile__(
+    "ibar 0\n"
+    :
+    :
+    );
+  return Address;
+}
+
+/**
+  Writes Back and Invalidates the entire data cache in cache coherency domain
+  of the calling CPU.
+
+  Writes Back and Invalidates the entire data cache in cache coherency domain
+  of the calling CPU. This function guarantees that all dirty cache lines are
+  written back to system memory, and also invalidates all the data cache lines
+  in the cache coherency domain of the calling CPU.
+
+**/
+VOID
+EFIAPI
+WriteBackInvalidateDataCache (
+  VOID
+  )
+{
+  DEBUG ((DEBUG_ERROR, "%a: Not currently implemented on LoongArch.\n", __FUNCTION__));
+}
+
+/**
+  Writes Back and Invalidates a range of data cache lines in the cache
+  coherency domain of the calling CPU.
+
+  Writes Back and Invalidates the data cache lines specified by Address and
+  Length. If Address is not aligned on a cache line boundary, then the entire
+  data cache line containing Address is written back and invalidated. If
+  Address + Length is not aligned on a cache line boundary, then the entire
+  data cache line containing Address + Length - 1 is written back and
+  invalidated. This function may choose to write back and invalidate the
+  entire data cache if that is more efficient than writing back and
+  invalidating the specified range. If Length is 0, then no data cache lines
+  are written back and invalidated. Address is returned.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param Address The base address of the data cache lines to write back and
+                 invalidate.
+                 If the CPU is in a physical addressing mode, then Address is
+                 a physical address. If the CPU is in a virtual addressing
+                 mode, then Address is a virtual address.
+  @param Length  The number of bytes to write back and invalidate from the
+                 data cache.
+
+  @return Address of cache invalidation.
+
+**/
+VOID *
+EFIAPI
+WriteBackInvalidateDataCacheRange (
+  IN VOID   *Address,
+  IN UINTN  Length
+  )
+{
+  DEBUG ((DEBUG_ERROR, "%a: Not currently implemented on LoongArch.\n", __FUNCTION__));
+  return Address;
+}
+
+/**
+  Writes Back the entire data cache in cache coherency domain of the calling
+  CPU.
+
+  Writes Back the entire data cache in cache coherency domain of the calling
+  CPU. This function guarantees that all dirty cache lines are written back to
+  system memory. This function may also invalidate all the data cache lines in
+  the cache coherency domain of the calling CPU.
+
+**/
+VOID
+EFIAPI
+WriteBackDataCache (
+  VOID
+  )
+{
+  WriteBackInvalidateDataCache ();
+}
+
+/**
+  Writes Back a range of data cache lines in the cache coherency domain of the
+  calling CPU.
+
+  Writes Back the data cache lines specified by Address and Length. If Address
+  is not aligned on a cache line boundary, then the entire data cache line
+  containing Address is written back. If Address + Length is not aligned on a
+  cache line boundary, then the entire data cache line containing
+  Address + Length - 1 is written back. This function may choose to write back
+  the entire data cache if that is more efficient than writing back the
+  specified range. If Length is 0, then no data cache lines are written back.
+  This function may also invalidate all the data cache lines in the specified
+  range of the cache coherency domain of the calling CPU. Address is returned.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param Address The base address of the data cache lines to write back. If
+                 the CPU is in a physical addressing mode, then Address is a
+                 physical address. If the CPU is in a virtual addressing
+                 mode, then Address is a virtual address.
+  @param Length  The number of bytes to write back from the data cache.
+
+  @return Address of cache written in main memory.
+
+**/
+VOID *
+EFIAPI
+WriteBackDataCacheRange (
+  IN VOID   *Address,
+  IN UINTN  Length
+  )
+{
+  DEBUG ((DEBUG_ERROR, "%a: Not currently implemented on LoongArch.\n", __FUNCTION__));
+  return Address;
+}
+
+/**
+  Invalidates the entire data cache in cache coherency domain of the calling
+  CPU.
+
+  Invalidates the entire data cache in cache coherency domain of the calling
+  CPU. This function must be used with care because dirty cache lines are not
+  written back to system memory. It is typically used for cache diagnostics.
+  If the CPU does not support invalidation of the entire data cache, then a
+  write back and invalidate operation should be performed on the entire data
+  cache.
+
+**/
+VOID
+EFIAPI
+InvalidateDataCache (
+  VOID
+  )
+{
+  __asm__ __volatile__(
+    "dbar 0\n"
+    :
+    :
+    );
+}
+
+/**
+  Invalidates a range of data cache lines in the cache coherency domain of the
+  calling CPU.
+
+  Invalidates the data cache lines specified by Address and Length.
+  If Address is not aligned on a cache line boundary, then the entire data
+  cache line containing Address is invalidated. If Address + Length is not
+  aligned on a cache line boundary, then the entire data cache line containing
+  Address + Length - 1 is invalidated. This function must never invalidate any
+  cache lines outside the specified range. If Length is 0, then no data cache
+  lines are invalidated. Address is returned. This function must be used with
+  care because dirty cache lines are not written back to system memory. It is
+  typically used for cache diagnostics. If the CPU does not support
+  invalidation of a data cache range, then a write back and invalidate
+  operation should be performed on the data cache range.
+
+  If Length is greater than (MAX_ADDRESS - Address + 1), then ASSERT().
+
+  @param Address The base address of the data cache lines to invalidate. If
+                 the CPU is in a physical addressing mode, then Address is a
+                 physical address. If the CPU is in a virtual addressing mode,
+                 then Address is a virtual address.
+  @param Length  The number of bytes to invalidate from the data cache.
+
+  @return Address.
+
+**/
+VOID *
+EFIAPI
+InvalidateDataCacheRange (
+  IN VOID   *Address,
+  IN UINTN  Length
+  )
+{
+  __asm__ __volatile__(
+    "dbar 0\n"
+    :
+    :
+    );
+  return Address;
+}
-- 
2.27.0
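
Editor's note (not part of the patch): the range functions above currently
fall back to a whole-cache "ibar 0"/"dbar 0" barrier or a DEBUG stub, while
their comments describe line-granular semantics (the cache line containing
Address and the line containing Address + Length - 1 must both be covered).
The sketch below illustrates how such a range walk is commonly structured in
other BaseCacheMaintenanceLib ports. The 64-byte line size and the per-line
helper FlushCacheLineHypothetical are placeholders introduced purely for
illustration; they are not real LoongArch instructions or EDK2 APIs.

  #include <Base.h>

  //
  // Placeholder line size; a real port would probe this from the hardware.
  //
  #define EXAMPLE_CACHE_LINE_SIZE  64

  STATIC
  VOID
  FlushCacheLineHypothetical (
    IN UINTN  LineBase
    )
  {
    //
    // Placeholder for the architecture's per-line write-back/invalidate
    // operation; intentionally left empty in this sketch.
    //
  }

  VOID *
  ExampleWriteBackInvalidateDataCacheRange (
    IN VOID   *Address,
    IN UINTN  Length
    )
  {
    UINTN  Start;
    UINTN  End;

    if (Length == 0) {
      return Address;
    }

    //
    // Round Start down and End down to line boundaries so that the lines
    // containing Address and Address + Length - 1 are both covered.
    //
    Start = (UINTN)Address & ~(UINTN)(EXAMPLE_CACHE_LINE_SIZE - 1);
    End   = ((UINTN)Address + Length - 1) & ~(UINTN)(EXAMPLE_CACHE_LINE_SIZE - 1);

    while (Start <= End) {
      FlushCacheLineHypothetical (Start);
      Start += EXAMPLE_CACHE_LINE_SIZE;
    }

    return Address;
  }

A typical caller-side sequence, using only the functions this library already
exports, is to call WriteBackDataCacheRange () on a freshly written code
buffer and then InvalidateInstructionCacheRange () on the same range before
executing from it.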