From: "Abner Chang" <abner.chang@hpe.com>
To: "devel@edk2.groups.io" <devel@edk2.groups.io>,
"lichao@loongson.cn" <lichao@loongson.cn>
Cc: Michael D Kinney <michael.d.kinney@intel.com>,
Liming Gao <gaoliming@byosoft.com.cn>,
Zhiguang Liu <zhiguang.liu@intel.com>,
"Baoqi Zhang" <zhangbaoqi@loongson.cn>
Subject: Re: [edk2-devel] [staging/LoongArch RESEND PATCH v1 21/33] MdePkg/BaseLib: BaseLib for LOONGARCH64 architecture.
Date: Fri, 8 Apr 2022 07:23:37 +0000
Message-ID: <PH7PR84MB188501F4AAA20AC7F5073EE5FFE99@PH7PR84MB1885.NAMPRD84.PROD.OUTLOOK.COM>
In-Reply-To: <20220209065542.2986555-1-lichao@loongson.cn>
Acked-by: Abner Chang <abner.chang@hpe.com>
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Chao Li
> Sent: Wednesday, February 9, 2022 2:56 PM
> To: devel@edk2.groups.io
> Cc: Michael D Kinney <michael.d.kinney@intel.com>; Liming Gao
> <gaoliming@byosoft.com.cn>; Zhiguang Liu <zhiguang.liu@intel.com>; Baoqi
> Zhang <zhangbaoqi@loongson.cn>
> Subject: [edk2-devel] [staging/LoongArch RESEND PATCH v1 21/33]
> MdePkg/BaseLib: BaseLib for LOONGARCH64 architecture.
>
> Add LoongArch LOONGARCH64 BaseLib functions.
>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Liming Gao <gaoliming@byosoft.com.cn>
> Cc: Zhiguang Liu <zhiguang.liu@intel.com>
>
> Signed-off-by: Chao Li <lichao@loongson.cn>
> Co-authored-by: Baoqi Zhang <zhangbaoqi@loongson.cn>
> ---
> MdePkg/Include/Library/BaseLib.h | 24 ++
> MdePkg/Library/BaseLib/BaseLib.inf | 13 +
> .../BaseLib/LoongArch64/CpuBreakpoint.S | 24 ++
> MdePkg/Library/BaseLib/LoongArch64/CpuPause.S | 31 +++
> .../BaseLib/LoongArch64/DisableInterrupts.S | 21 ++
> .../BaseLib/LoongArch64/EnableInterrupts.S | 21 ++
> .../BaseLib/LoongArch64/GetInterruptState.S | 35 +++
> .../BaseLib/LoongArch64/InternalSwitchStack.c | 58 +++++
> .../Library/BaseLib/LoongArch64/MemoryFence.S | 19 ++
> .../BaseLib/LoongArch64/SetJumpLongJump.S | 49 ++++
> .../Library/BaseLib/LoongArch64/SwitchStack.S | 39 +++
> .../Library/BaseLib/LoongArch64/Unaligned.c | 244 ++++++++++++++++++
> 12 files changed, 578 insertions(+)
> create mode 100644 MdePkg/Library/BaseLib/LoongArch64/CpuBreakpoint.S
> create mode 100644 MdePkg/Library/BaseLib/LoongArch64/CpuPause.S
> create mode 100644 MdePkg/Library/BaseLib/LoongArch64/DisableInterrupts.S
> create mode 100644 MdePkg/Library/BaseLib/LoongArch64/EnableInterrupts.S
> create mode 100644 MdePkg/Library/BaseLib/LoongArch64/GetInterruptState.S
> create mode 100644 MdePkg/Library/BaseLib/LoongArch64/InternalSwitchStack.c
> create mode 100644 MdePkg/Library/BaseLib/LoongArch64/MemoryFence.S
> create mode 100644 MdePkg/Library/BaseLib/LoongArch64/SetJumpLongJump.S
> create mode 100644 MdePkg/Library/BaseLib/LoongArch64/SwitchStack.S
> create mode 100644 MdePkg/Library/BaseLib/LoongArch64/Unaligned.c
>
> diff --git a/MdePkg/Include/Library/BaseLib.h b/MdePkg/Include/Library/BaseLib.h
> index 6aa0d97218..3c27e2ea93 100644
> --- a/MdePkg/Include/Library/BaseLib.h
> +++ b/MdePkg/Include/Library/BaseLib.h
> @@ -6,6 +6,7 @@ Copyright (c) 2006 - 2021, Intel Corporation. All rights reserved.<BR>
> Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.<BR>
> Copyright (c) Microsoft Corporation.<BR>
> Portions Copyright (c) 2020, Hewlett Packard Enterprise Development LP. All rights reserved.<BR>
> +Portions Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
>
> SPDX-License-Identifier: BSD-2-Clause-Patent
>
> @@ -152,6 +153,29 @@ typedef struct {
>
> #endif // defined (MDE_CPU_RISCV64)
>
> +#if defined (MDE_CPU_LOONGARCH64)
> +///
> +/// The LoongArch architecture context buffer used by SetJump() and LongJump()
> +///
> +typedef struct {
> + UINT64 S0;
> + UINT64 S1;
> + UINT64 S2;
> + UINT64 S3;
> + UINT64 S4;
> + UINT64 S5;
> + UINT64 S6;
> + UINT64 S7;
> + UINT64 S8;
> + UINT64 SP;
> + UINT64 FP;
> + UINT64 RA;
> +} BASE_LIBRARY_JUMP_BUFFER;
> +
> +#define BASE_LIBRARY_JUMP_BUFFER_ALIGNMENT 8
> +
> +#endif // defined (MDE_CPU_LOONGARCH64)
> +
> //
> // String Services
> //
> diff --git a/MdePkg/Library/BaseLib/BaseLib.inf b/MdePkg/Library/BaseLib/BaseLib.inf
> index cebda3b210..4c9b6b50dd 100644
> --- a/MdePkg/Library/BaseLib/BaseLib.inf
> +++ b/MdePkg/Library/BaseLib/BaseLib.inf
> @@ -409,6 +409,19 @@
> RiscV64/RiscVInterrupt.S | GCC
> RiscV64/FlushCache.S | GCC
>
> +[Sources.LOONGARCH64]
> + Math64.c
> + LoongArch64/Unaligned.c
> + LoongArch64/InternalSwitchStack.c
> + LoongArch64/GetInterruptState.S | GCC
> + LoongArch64/EnableInterrupts.S | GCC
> + LoongArch64/DisableInterrupts.S | GCC
> + LoongArch64/MemoryFence.S | GCC
> + LoongArch64/CpuBreakpoint.S | GCC
> + LoongArch64/CpuPause.S | GCC
> + LoongArch64/SetJumpLongJump.S | GCC
> + LoongArch64/SwitchStack.S | GCC
> +
> [Packages]
> MdePkg/MdePkg.dec
>
> diff --git a/MdePkg/Library/BaseLib/LoongArch64/CpuBreakpoint.S b/MdePkg/Library/BaseLib/LoongArch64/CpuBreakpoint.S
> new file mode 100644
> index 0000000000..4e022e9bb5
> --- /dev/null
> +++ b/MdePkg/Library/BaseLib/LoongArch64/CpuBreakpoint.S
> @@ -0,0 +1,24 @@
> +#------------------------------------------------------------------------------
> +#
> +# CpuBreakpoint for LoongArch
> +#
> +# Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
> +#
> +# SPDX-License-Identifier: BSD-2-Clause-Patent
> +#
> +#------------------------------------------------------------------------------
> +
> +ASM_GLOBAL ASM_PFX(CpuBreakpoint)
> +
> +#/**
> +# Generates a breakpoint on the CPU.
> +#
> +# Generates a breakpoint on the CPU. The breakpoint must be implemented such
> +# that code can resume normal execution after the breakpoint.
> +#
> +#**/
> +
> +ASM_PFX(CpuBreakpoint):
> + break 3
> + jirl $zero, $ra, 0
> + .end
> diff --git a/MdePkg/Library/BaseLib/LoongArch64/CpuPause.S b/MdePkg/Library/BaseLib/LoongArch64/CpuPause.S
> new file mode 100644
> index 0000000000..b98dd48f4d
> --- /dev/null
> +++ b/MdePkg/Library/BaseLib/LoongArch64/CpuPause.S
> @@ -0,0 +1,31 @@
> +#------------------------------------------------------------------------------
> +#
> +# CpuPause for LoongArch
> +#
> +# Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
> +#
> +# SPDX-License-Identifier: BSD-2-Clause-Patent
> +#
> +#------------------------------------------------------------------------------
> +
> +ASM_GLOBAL ASM_PFX(CpuPause)
> +
> +#/**
> +# Requests CPU to pause for a short period of time.
> +#
> +# Requests CPU to pause for a short period of time. Typically used in MP
> +# systems to prevent memory starvation while waiting for a spin lock.
> +#
> +#**/
> +
> +ASM_PFX(CpuPause):
> + andi $zero, $zero, 0x0 //nop
> + andi $zero, $zero, 0x0 //nop
> + andi $zero, $zero, 0x0 //nop
> + andi $zero, $zero, 0x0 //nop
> + andi $zero, $zero, 0x0 //nop
> + andi $zero, $zero, 0x0 //nop
> + andi $zero, $zero, 0x0 //nop
> + andi $zero, $zero, 0x0 //nop
> + jirl $zero, $ra, 0
> + .end
> diff --git a/MdePkg/Library/BaseLib/LoongArch64/DisableInterrupts.S b/MdePkg/Library/BaseLib/LoongArch64/DisableInterrupts.S
> new file mode 100644
> index 0000000000..0f228339af
> --- /dev/null
> +++ b/MdePkg/Library/BaseLib/LoongArch64/DisableInterrupts.S
> @@ -0,0 +1,21 @@
> +#------------------------------------------------------------------------------
> +#
> +# LoongArch interrupt disable
> +#
> +# Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
> +#
> +# SPDX-License-Identifier: BSD-2-Clause-Patent
> +#
> +#------------------------------------------------------------------------------
> +
> +ASM_GLOBAL ASM_PFX(DisableInterrupts)
> +
> +#/**
> +# Disables CPU interrupts.
> +#**/
> +
> +ASM_PFX(DisableInterrupts):
> + li.w $t0, 0x4
> + csrxchg $zero, $t0, 0x0
> + jirl $zero, $ra, 0
> + .end
> diff --git a/MdePkg/Library/BaseLib/LoongArch64/EnableInterrupts.S b/MdePkg/Library/BaseLib/LoongArch64/EnableInterrupts.S
> new file mode 100644
> index 0000000000..3c34fb2cdd
> --- /dev/null
> +++ b/MdePkg/Library/BaseLib/LoongArch64/EnableInterrupts.S
> @@ -0,0 +1,21 @@
> +#------------------------------------------------------------------------------
> +#
> +# LoongArch interrupt enable
> +#
> +# Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
> +#
> +# SPDX-License-Identifier: BSD-2-Clause-Patent
> +#
> +#------------------------------------------------------------------------------
> +
> +ASM_GLOBAL ASM_PFX(EnableInterrupts)
> +
> +#/**
> +# Enables CPU interrupts.
> +#**/
> +
> +ASM_PFX(EnableInterrupts):
> + li.w $t0, 0x4
> + csrxchg $t0, $t0, 0x0
> + jirl $zero, $ra, 0
> + .end
> diff --git a/MdePkg/Library/BaseLib/LoongArch64/GetInterruptState.S b/MdePkg/Library/BaseLib/LoongArch64/GetInterruptState.S
> new file mode 100644
> index 0000000000..bfd1f2d5f7
> --- /dev/null
> +++ b/MdePkg/Library/BaseLib/LoongArch64/GetInterruptState.S
> @@ -0,0 +1,35 @@
> +#------------------------------------------------------------------------------
> +#
> +# Get LoongArch interrupt status
> +#
> +# Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
> +#
> +# SPDX-License-Identifier: BSD-2-Clause-Patent
> +#
> +#------------------------------------------------------------------------------
> +
> +ASM_GLOBAL ASM_PFX(GetInterruptState)
> +
> +#/**
> +# Retrieves the current CPU interrupt state.
> +#
> +# Returns TRUE if interrupts are currently enabled; otherwise returns FALSE.
> +#
> +# @retval TRUE CPU interrupts are enabled.
> +# @retval FALSE CPU interrupts are disabled.
> +#
> +#**/
> +
> +ASM_PFX(GetInterruptState):
> + li.w $t1, 0x4
> + csrrd $t0, 0x0
> + and $t0, $t0, $t1
> + beqz $t0, 1f
> + li.w $a0, 0x1
> + b 2f
> +1:
> + li.w $a0, 0x0
> +2:
> + jirl $zero, $ra, 0
> + .end
> diff --git a/MdePkg/Library/BaseLib/LoongArch64/InternalSwitchStack.c b/MdePkg/Library/BaseLib/LoongArch64/InternalSwitchStack.c
> new file mode 100644
> index 0000000000..1f1e43106f
> --- /dev/null
> +++ b/MdePkg/Library/BaseLib/LoongArch64/InternalSwitchStack.c
> @@ -0,0 +1,58 @@
> +/** @file
> + SwitchStack() function for LoongArch.
> +
> + Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
> +
> + SPDX-License-Identifier: BSD-2-Clause-Patent
> +**/
> +
> +#include "BaseLibInternals.h"
> +
> +UINTN
> +EFIAPI
> +InternalSwitchStackAsm (
> + IN BASE_LIBRARY_JUMP_BUFFER *JumpBuffer
> + );
> +
> +/**
> + Transfers control to a function starting with a new stack.
> +
> + Transfers control to the function specified by EntryPoint using the
> + new stack specified by NewStack and passing in the parameters specified
> + by Context1 and Context2. Context1 and Context2 are optional and may
> + be NULL. The function EntryPoint must never return.
> +
> + If EntryPoint is NULL, then ASSERT().
> + If NewStack is NULL, then ASSERT().
> +
> + @param EntryPoint A pointer to function to call with the new stack.
> + @param Context1 A pointer to the context to pass into the EntryPoint
> + function.
> + @param Context2 A pointer to the context to pass into the EntryPoint
> + function.
> + @param NewStack A pointer to the new stack to use for the EntryPoint
> + function.
> + @param Marker VA_LIST marker for the variable argument list.
> +
> +**/
> +VOID
> +EFIAPI
> +InternalSwitchStack (
> + IN SWITCH_STACK_ENTRY_POINT EntryPoint,
> + IN VOID *Context1, OPTIONAL
> + IN VOID *Context2, OPTIONAL
> + IN VOID *NewStack,
> + IN VA_LIST Marker
> + )
> +
> +{
> + BASE_LIBRARY_JUMP_BUFFER JumpBuffer;
> +
> + JumpBuffer.RA = (UINTN)EntryPoint;
> + JumpBuffer.SP = (UINTN)NewStack - sizeof (VOID*);
> + JumpBuffer.SP -= sizeof (Context1) + sizeof (Context2);
> + ((VOID **)(UINTN)JumpBuffer.SP)[0] = Context1;
> + ((VOID **)(UINTN)JumpBuffer.SP)[1] = Context2;
> +
> + InternalSwitchStackAsm(&JumpBuffer);
> +}
> diff --git a/MdePkg/Library/BaseLib/LoongArch64/MemoryFence.S b/MdePkg/Library/BaseLib/LoongArch64/MemoryFence.S
> new file mode 100644
> index 0000000000..0d8dc10914
> --- /dev/null
> +++ b/MdePkg/Library/BaseLib/LoongArch64/MemoryFence.S
> @@ -0,0 +1,19 @@
> +#------------------------------------------------------------------------------
> +#
> +# MemoryFence() for LoongArch64
> +#
> +# Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
> +#
> +# SPDX-License-Identifier: BSD-2-Clause-Patent
> +#
> +#------------------------------------------------------------------------------
> +
> +ASM_GLOBAL ASM_PFX(MemoryFence)
> +
> +#
> +# Memory fence for LoongArch64
> +#
> +ASM_PFX(MemoryFence):
> + dbar 0
> + jirl $zero, $ra, 0
> + .end
> diff --git a/MdePkg/Library/BaseLib/LoongArch64/SetJumpLongJump.S b/MdePkg/Library/BaseLib/LoongArch64/SetJumpLongJump.S
> new file mode 100644
> index 0000000000..35267c925f
> --- /dev/null
> +++ b/MdePkg/Library/BaseLib/LoongArch64/SetJumpLongJump.S
> @@ -0,0 +1,49 @@
> +#------------------------------------------------------------------------------
> +#
> +# Set/Long jump for LoongArch
> +#
> +# Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
> +#
> +# SPDX-License-Identifier: BSD-2-Clause-Patent
> +#
> +#------------------------------------------------------------------------------
> +
> +#define STORE st.d /* 64 bit mode regsave instruction */
> +#define LOAD ld.d /* 64 bit mode regload instruction */
> +#define RSIZE 8 /* 64 bit mode register size */
> +
> +ASM_GLOBAL ASM_PFX(SetJump)
> +ASM_GLOBAL ASM_PFX(InternalLongJump)
> +
> +ASM_PFX(SetJump):
> + STORE $s0, $a0, RSIZE * 0
> + STORE $s1, $a0, RSIZE * 1
> + STORE $s2, $a0, RSIZE * 2
> + STORE $s3, $a0, RSIZE * 3
> + STORE $s4, $a0, RSIZE * 4
> + STORE $s5, $a0, RSIZE * 5
> + STORE $s6, $a0, RSIZE * 6
> + STORE $s7, $a0, RSIZE * 7
> + STORE $s8, $a0, RSIZE * 8
> + STORE $sp, $a0, RSIZE * 9
> + STORE $fp, $a0, RSIZE * 10
> + STORE $ra, $a0, RSIZE * 11
> + li.w $a0, 0 # Setjmp return
> + jirl $zero, $ra, 0
> +
> +ASM_PFX(InternalLongJump):
> + LOAD $ra, $a0, RSIZE * 11
> + LOAD $s0, $a0, RSIZE * 0
> + LOAD $s1, $a0, RSIZE * 1
> + LOAD $s2, $a0, RSIZE * 2
> + LOAD $s3, $a0, RSIZE * 3
> + LOAD $s4, $a0, RSIZE * 4
> + LOAD $s5, $a0, RSIZE * 5
> + LOAD $s6, $a0, RSIZE * 6
> + LOAD $s7, $a0, RSIZE * 7
> + LOAD $s8, $a0, RSIZE * 8
> + LOAD $sp, $a0, RSIZE * 9
> + LOAD $fp, $a0, RSIZE * 10
> + move $a0, $a1
> + jirl $zero, $ra, 0
> + .end
> diff --git a/MdePkg/Library/BaseLib/LoongArch64/SwitchStack.S b/MdePkg/Library/BaseLib/LoongArch64/SwitchStack.S
> new file mode 100644
> index 0000000000..4facc76082
> --- /dev/null
> +++ b/MdePkg/Library/BaseLib/LoongArch64/SwitchStack.S
> @@ -0,0 +1,39 @@
> +#------------------------------------------------------------------------------
> +#
> +# InternalSwitchStackAsm for LoongArch
> +#
> +# Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
> +#
> +# SPDX-License-Identifier: BSD-2-Clause-Patent
> +#
> +#------------------------------------------------------------------------------
> +
> +#define STORE st.d /* 64 bit mode regsave instruction */
> +#define LOAD ld.d /* 64 bit mode regload instruction */
> +#define RSIZE 8 /* 64 bit mode register size */
> +
> +ASM_GLOBAL ASM_PFX(InternalSwitchStackAsm)
> +
> +/**
> + Allows the caller to switch the stack and jump to the new entry point.
> +
> + @param JumpBuffer A pointer to CPU context buffer.
> +**/
> +
> +ASM_PFX(InternalSwitchStackAsm):
> + LOAD $ra, $a0, RSIZE * 11
> + LOAD $s0, $a0, RSIZE * 0
> + LOAD $s1, $a0, RSIZE * 1
> + LOAD $s2, $a0, RSIZE * 2
> + LOAD $s3, $a0, RSIZE * 3
> + LOAD $s4, $a0, RSIZE * 4
> + LOAD $s5, $a0, RSIZE * 5
> + LOAD $s6, $a0, RSIZE * 6
> + LOAD $s7, $a0, RSIZE * 7
> + LOAD $s8, $a0, RSIZE * 8
> + LOAD $sp, $a0, RSIZE * 9
> + LOAD $fp, $a0, RSIZE * 10
> + LOAD $a0, $sp, 0
> + LOAD $a1, $sp, 8
> + jirl $zero, $ra, 0
> + .end
> diff --git a/MdePkg/Library/BaseLib/LoongArch64/Unaligned.c b/MdePkg/Library/BaseLib/LoongArch64/Unaligned.c
> new file mode 100644
> index 0000000000..33fa3d2eed
> --- /dev/null
> +++ b/MdePkg/Library/BaseLib/LoongArch64/Unaligned.c
> @@ -0,0 +1,244 @@
> +/** @file
> + Unaligned access functions of BaseLib for LoongArch.
> +
> + Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
> +
> + SPDX-License-Identifier: BSD-2-Clause-Patent
> +
> +**/
> +
> +#include "BaseLibInternals.h"
> +
> +/**
> + Reads a 16-bit value from memory that may be unaligned.
> +
> + This function returns the 16-bit value pointed to by Buffer. The function
> + guarantees that the read operation does not produce an alignment fault.
> +
> + If the Buffer is NULL, then ASSERT().
> +
> + @param Buffer The pointer to a 16-bit value that may be unaligned.
> +
> + @return The 16-bit value read from Buffer.
> +
> +**/
> +UINT16
> +EFIAPI
> +ReadUnaligned16 (
> + IN CONST UINT16 *Buffer
> + )
> +{
> + volatile UINT8 LowerByte;
> + volatile UINT8 HigherByte;
> +
> + ASSERT (Buffer != NULL);
> +
> + LowerByte = ((UINT8*)Buffer)[0];
> + HigherByte = ((UINT8*)Buffer)[1];
> +
> + return (UINT16)(LowerByte | (HigherByte << 8));
> +}
> +
> +/**
> + Writes a 16-bit value to memory that may be unaligned.
> +
> + This function writes the 16-bit value specified by Value to Buffer. Value is
> + returned. The function guarantees that the write operation does not produce
> + an alignment fault.
> +
> + If the Buffer is NULL, then ASSERT().
> +
> + @param Buffer The pointer to a 16-bit value that may be unaligned.
> + @param Value 16-bit value to write to Buffer.
> +
> + @return The 16-bit value to write to Buffer.
> +
> +**/
> +UINT16
> +EFIAPI
> +WriteUnaligned16 (
> + OUT UINT16 *Buffer,
> + IN UINT16 Value
> + )
> +{
> + ASSERT (Buffer != NULL);
> +
> + ((volatile UINT8*)Buffer)[0] = (UINT8)Value;
> + ((volatile UINT8*)Buffer)[1] = (UINT8)(Value >> 8);
> +
> + return Value;
> +}
> +
> +/**
> + Reads a 24-bit value from memory that may be unaligned.
> +
> + This function returns the 24-bit value pointed to by Buffer. The function
> + guarantees that the read operation does not produce an alignment fault.
> +
> + If the Buffer is NULL, then ASSERT().
> +
> + @param Buffer The pointer to a 24-bit value that may be unaligned.
> +
> + @return The 24-bit value read from Buffer.
> +
> +**/
> +UINT32
> +EFIAPI
> +ReadUnaligned24 (
> + IN CONST UINT32 *Buffer
> + )
> +{
> + ASSERT (Buffer != NULL);
> +
> + return (UINT32)(
> + ReadUnaligned16 ((UINT16*)Buffer) |
> + (((UINT8*)Buffer)[2] << 16)
> + );
> +}
> +
> +/**
> + Writes a 24-bit value to memory that may be unaligned.
> +
> + This function writes the 24-bit value specified by Value to Buffer. Value is
> + returned. The function guarantees that the write operation does not produce
> + an alignment fault.
> +
> + If the Buffer is NULL, then ASSERT().
> +
> + @param Buffer The pointer to a 24-bit value that may be unaligned.
> + @param Value 24-bit value to write to Buffer.
> +
> + @return The 24-bit value to write to Buffer.
> +
> +**/
> +UINT32
> +EFIAPI
> +WriteUnaligned24 (
> + OUT UINT32 *Buffer,
> + IN UINT32 Value
> + )
> +{
> + ASSERT (Buffer != NULL);
> +
> + WriteUnaligned16 ((UINT16*)Buffer, (UINT16)Value);
> + *(UINT8*)((UINT16*)Buffer + 1) = (UINT8)(Value >> 16);
> + return Value;
> +}
> +
> +/**
> + Reads a 32-bit value from memory that may be unaligned.
> +
> + This function returns the 32-bit value pointed to by Buffer. The function
> + guarantees that the read operation does not produce an alignment fault.
> +
> + If the Buffer is NULL, then ASSERT().
> +
> + @param Buffer The pointer to a 32-bit value that may be unaligned.
> +
> + @return The 32-bit value read from Buffer.
> +
> +**/
> +UINT32
> +EFIAPI
> +ReadUnaligned32 (
> + IN CONST UINT32 *Buffer
> + )
> +{
> + UINT16 LowerBytes;
> + UINT16 HigherBytes;
> +
> + ASSERT (Buffer != NULL);
> +
> + LowerBytes = ReadUnaligned16 ((UINT16*) Buffer);
> + HigherBytes = ReadUnaligned16 ((UINT16*) Buffer + 1);
> +
> + return (UINT32) (LowerBytes | (HigherBytes << 16));
> +}
> +
> +/**
> + Writes a 32-bit value to memory that may be unaligned.
> +
> + This function writes the 32-bit value specified by Value to Buffer. Value is
> + returned. The function guarantees that the write operation does not produce
> + an alignment fault.
> +
> + If the Buffer is NULL, then ASSERT().
> +
> + @param Buffer The pointer to a 32-bit value that may be unaligned.
> + @param Value 32-bit value to write to Buffer.
> +
> + @return The 32-bit value to write to Buffer.
> +
> +**/
> +UINT32
> +EFIAPI
> +WriteUnaligned32 (
> + OUT UINT32 *Buffer,
> + IN UINT32 Value
> + )
> +{
> + ASSERT (Buffer != NULL);
> +
> + WriteUnaligned16 ((UINT16*)Buffer, (UINT16)Value);
> + WriteUnaligned16 ((UINT16*)Buffer + 1, (UINT16)(Value >> 16));
> + return Value;
> +}
> +
> +/**
> + Reads a 64-bit value from memory that may be unaligned.
> +
> + This function returns the 64-bit value pointed to by Buffer. The function
> + guarantees that the read operation does not produce an alignment fault.
> +
> + If the Buffer is NULL, then ASSERT().
> +
> + @param Buffer The pointer to a 64-bit value that may be unaligned.
> +
> + @return The 64-bit value read from Buffer.
> +
> +**/
> +UINT64
> +EFIAPI
> +ReadUnaligned64 (
> + IN CONST UINT64 *Buffer
> + )
> +{
> + UINT32 LowerBytes;
> + UINT32 HigherBytes;
> +
> + ASSERT (Buffer != NULL);
> +
> + LowerBytes = ReadUnaligned32 ((UINT32*) Buffer);
> + HigherBytes = ReadUnaligned32 ((UINT32*) Buffer + 1);
> +
> + return (UINT64) (LowerBytes | LShiftU64 (HigherBytes, 32));
> +}
> +
> +/**
> + Writes a 64-bit value to memory that may be unaligned.
> +
> + This function writes the 64-bit value specified by Value to Buffer. Value is
> + returned. The function guarantees that the write operation does not produce
> + an alignment fault.
> +
> + If the Buffer is NULL, then ASSERT().
> +
> + @param Buffer The pointer to a 64-bit value that may be unaligned.
> + @param Value 64-bit value to write to Buffer.
> +
> + @return The 64-bit value to write to Buffer.
> +
> +**/
> +UINT64
> +EFIAPI
> +WriteUnaligned64 (
> + OUT UINT64 *Buffer,
> + IN UINT64 Value
> + )
> +{
> + ASSERT (Buffer != NULL);
> +
> + WriteUnaligned32 ((UINT32*)Buffer, (UINT32)Value);
> + WriteUnaligned32 ((UINT32*)Buffer + 1, (UINT32)RShiftU64 (Value, 32));
> + return Value;
> +}
> --
> 2.27.0
>
Thread overview: 2+ messages
2022-02-09 6:55 [staging/LoongArch RESEND PATCH v1 21/33] MdePkg/BaseLib: BaseLib for LOONGARCH64 architecture Chao Li
2022-04-08 7:23 ` Abner Chang