Date: Thu, 5 Sep 2019 17:11:58 +0100
From: "Leif Lindholm" <leif.lindholm@linaro.org>
To: devel@edk2.groups.io, abner.chang@hpe.com
Subject: Re: [edk2-devel] [edk2-staging/RISC-V-V2 PATCH v1 12/22]: MdePkg/BaseLib: BaseLib for RISC-V RV64 Processor.
Message-ID: <20190905161158.GE29255@bivouac.eciton.net>
References: <1567593797-26216-1-git-send-email-abner.chang@hpe.com> <1567593797-26216-13-git-send-email-abner.chang@hpe.com>
In-Reply-To: <1567593797-26216-13-git-send-email-abner.chang@hpe.com>

On Wed, Sep 04, 2019 at 06:43:07PM +0800, Abner Chang wrote:
> Add RISC-V processor binding and RISC-V processor specific definitions
> and macros.
> 
> Contributed-under: TianoCore Contribution Agreement 1.0
> Signed-off-by: Abner Chang <abner.chang@hpe.com>
> ---
>  MdePkg/Library/BaseLib/BaseLib.inf                 |  18 +-
>  MdePkg/Library/BaseLib/RiscV64/CpuBreakpoint.c     |  33 ++
>  MdePkg/Library/BaseLib/RiscV64/CpuPause.c          |  35 ++
>  MdePkg/Library/BaseLib/RiscV64/DisableInterrupts.c |  33 ++
>  MdePkg/Library/BaseLib/RiscV64/EnableInterrupts.c  |  33 ++
>  MdePkg/Library/BaseLib/RiscV64/FlushCache.S        |  28 +
>  MdePkg/Library/BaseLib/RiscV64/GetInterruptState.c |  43 ++
>  .../Library/BaseLib/RiscV64/InternalSwitchStack.c  |  61 +++
>  MdePkg/Library/BaseLib/RiscV64/LongJump.c          |  38 ++
>  .../Library/BaseLib/RiscV64/RiscVCpuBreakpoint.S   |  20 +
>  MdePkg/Library/BaseLib/RiscV64/RiscVCpuPause.S     |  20 +
>  MdePkg/Library/BaseLib/RiscV64/RiscVInterrupt.S    |  33 ++
>  .../Library/BaseLib/RiscV64/RiscVSetJumpLongJump.S |  61 +++
>  MdePkg/Library/BaseLib/RiscV64/Unaligned.c         | 270 ++++++++++
>  MdePkg/Library/BaseLib/RiscV64/riscv_asm.h         | 194 +++++++
>  MdePkg/Library/BaseLib/RiscV64/riscv_encoding.h    | 574 +++++++++++++++++++++
>  MdePkg/Library/BaseLib/RiscV64/sbi_const.h         |  53 ++
>  17 files changed, 1546 insertions(+), 1 deletion(-)
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/CpuBreakpoint.c
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/CpuPause.c
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/DisableInterrupts.c
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/EnableInterrupts.c
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/FlushCache.S
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/GetInterruptState.c
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/InternalSwitchStack.c
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/LongJump.c
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/RiscVCpuBreakpoint.S
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/RiscVCpuPause.S
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/RiscVInterrupt.S
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/RiscVSetJumpLongJump.S
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/Unaligned.c
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/riscv_asm.h
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/riscv_encoding.h
>  create mode 100644 MdePkg/Library/BaseLib/RiscV64/sbi_const.h
> 
> diff --git a/MdePkg/Library/BaseLib/BaseLib.inf b/MdePkg/Library/BaseLib/BaseLib.inf
> index 3586beb..28d5795 100644
> --- a/MdePkg/Library/BaseLib/BaseLib.inf
> +++ b/MdePkg/Library/BaseLib/BaseLib.inf
> @@ -4,6 +4,7 @@
>  # Copyright (c) 2007 - 2019, Intel Corporation. All rights reserved.
> # Portions copyright (c) 2008 - 2009, Apple Inc. All rights reserved.
> # Portions copyright (c) 2011 - 2013, ARM Ltd. All rights reserved.
> +# Copyright (c) 2016, Hewlett Packard Enterprise Development LP. All rights reserved.
> # > # SPDX-License-Identifier: BSD-2-Clause-Patent > # > @@ -20,7 +21,7 @@ > LIBRARY_CLASS = BaseLib > > # > -# VALID_ARCHITECTURES = IA32 X64 EBC ARM AARCH64 > +# VALID_ARCHITECTURES = IA32 X64 EBC ARM AARCH64 RISCV64 > # > > [Sources] > @@ -381,6 +382,21 @@ Ah, right. I just noticed these patches don't follow the patch generation guidelines from https://github.com/tianocore/tianocore.github.io/wiki/Laszlo%27s-unkempt-git-guide-for-edk2-contributors-and-maintainers If you run BaseTools/Scripts/SetupGit.py, it will set up most of the important defaults for your clone (needs to be done once per repository). This greatly improves the reviewability of patches. / Leif > AArch64/CpuBreakpoint.asm | MSFT > AArch64/SpeculationBarrier.asm | MSFT > > +[Sources.RISCV64] > + Math64.c > + RiscV64/Unaligned.c > + RiscV64/InternalSwitchStack.c > + RiscV64/CpuBreakpoint.c > + RiscV64/GetInterruptState.c > + RiscV64/DisableInterrupts.c > + RiscV64/EnableInterrupts.c > + RiscV64/CpuPause.c > + RiscV64/RiscVSetJumpLongJump.S | GCC > + RiscV64/RiscVCpuBreakpoint.S | GCC > + RiscV64/RiscVCpuPause.S | GCC > + RiscV64/RiscVInterrupt.S | GCC > + RiscV64/FlushCache.S | GCC > + > [Packages] > MdePkg/MdePkg.dec > > diff --git a/MdePkg/Library/BaseLib/RiscV64/CpuBreakpoint.c b/MdePkg/Library/BaseLib/RiscV64/CpuBreakpoint.c > new file mode 100644 > index 0000000..763b813 > --- /dev/null > +++ b/MdePkg/Library/BaseLib/RiscV64/CpuBreakpoint.c > @@ -0,0 +1,33 @@ > +/** @file > + CPU breakpoint for RISC-V > + > + Copyright (c) 2016, Hewlett Packard Enterprise Development LP. All rights reserved.
> + > + This program and the accompanying materials > + are licensed and made available under the terms and conditions of the BSD License > + which accompanies this distribution. The full text of the license may be found at > + http://opensource.org/licenses/bsd-license.php > + > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. > +**/ > + > +#include "BaseLibInternals.h" > + > +extern VOID RiscVCpuBreakpoint (VOID); > + > +/** > + Generates a breakpoint on the CPU. > + > + Generates a breakpoint on the CPU. The breakpoint must be implemented such > + that code can resume normal execution after the breakpoint. > + > +**/ > +VOID > +EFIAPI > +CpuBreakpoint ( > + VOID > + ) > +{ > + RiscVCpuBreakpoint (); > +} > diff --git a/MdePkg/Library/BaseLib/RiscV64/CpuPause.c b/MdePkg/Library/BaseLib/RiscV64/CpuPause.c > new file mode 100644 > index 0000000..3094aac > --- /dev/null > +++ b/MdePkg/Library/BaseLib/RiscV64/CpuPause.c > @@ -0,0 +1,35 @@ > +/** @file > + CPU pause for RISC-V > + > + Copyright (c) 2016, Hewlett Packard Enterprise Development LP. All rights reserved.
> + > + This program and the accompanying materials > + are licensed and made available under the terms and conditions of the BSD License > + which accompanies this distribution. The full text of the license may be found at > + http://opensource.org/licenses/bsd-license.php > + > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. > +**/ > + > +#include "BaseLibInternals.h" > + > +extern VOID RiscVCpuPause (VOID); > + > + > +/** > + Requests CPU to pause for a short period of time. > + > + Requests CPU to pause for a short period of time. Typically used in MP > + systems to prevent memory starvation while waiting for a spin lock. > + > +**/ > +VOID > +EFIAPI > +CpuPause ( > + VOID > + ) > +{ > + RiscVCpuPause (); > +} > + > diff --git a/MdePkg/Library/BaseLib/RiscV64/DisableInterrupts.c b/MdePkg/Library/BaseLib/RiscV64/DisableInterrupts.c > new file mode 100644 > index 0000000..6f7e88c > --- /dev/null > +++ b/MdePkg/Library/BaseLib/RiscV64/DisableInterrupts.c > @@ -0,0 +1,33 @@ > +/** @file > + CPU disable interrupt function for RISC-V > + > + Copyright (c) 2016 - 2019, Hewlett Packard Enterprise Development LP. All rights reserved.
> + > + This program and the accompanying materials > + are licensed and made available under the terms and conditions of the BSD License > + which accompanies this distribution. The full text of the license may be found at > + http://opensource.org/licenses/bsd-license.php > + > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. > +**/ > +#include "BaseLibInternals.h" > +#include "riscv_asm.h" > +#include "riscv_encoding.h" > + > + > +extern VOID RiscVDisableInterrupts (VOID); > + > +/** > + Disables CPU interrupts. > + > +**/ > +VOID > +EFIAPI > +DisableInterrupts ( > + VOID > + ) > +{ > + csr_clear(CSR_SSTATUS, MSTATUS_SIE); //SIE > +} > + > diff --git a/MdePkg/Library/BaseLib/RiscV64/EnableInterrupts.c b/MdePkg/Library/BaseLib/RiscV64/EnableInterrupts.c > new file mode 100644 > index 0000000..a0ce150 > --- /dev/null > +++ b/MdePkg/Library/BaseLib/RiscV64/EnableInterrupts.c > @@ -0,0 +1,33 @@ > +/** @file > + CPU enable interrupt function for RISC-V > + > + Copyright (c) 2016-2019, Hewlett Packard Enterprise Development LP. All rights reserved.
> + > + This program and the accompanying materials > + are licensed and made available under the terms and conditions of the BSD License > + which accompanies this distribution. The full text of the license may be found at > + http://opensource.org/licenses/bsd-license.php > + > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. > +**/ > + > +#include "BaseLibInternals.h" > +#include "riscv_asm.h" > +#include "riscv_encoding.h" > + > +extern VOID RiscVEnableInterrupt (VOID); > + > +/** > + Enables CPU interrupts. > + > +**/ > +VOID > +EFIAPI > +EnableInterrupts ( > + VOID > + ) > +{ > + csr_set(CSR_SSTATUS, MSTATUS_SIE); //SIE > +} > + > diff --git a/MdePkg/Library/BaseLib/RiscV64/FlushCache.S b/MdePkg/Library/BaseLib/RiscV64/FlushCache.S > new file mode 100644 > index 0000000..75ddc46 > --- /dev/null > +++ b/MdePkg/Library/BaseLib/RiscV64/FlushCache.S > @@ -0,0 +1,28 @@ > +//------------------------------------------------------------------------------ > +// > +// RISC-V cache operation. > +// > +// Copyright (c) 2016, Hewlett Packard Enterprise Development LP. All rights reserved.
> +// > +// This program and the accompanying materials > +// are licensed and made available under the terms and conditions of the BSD License > +// which accompanies this distribution. The full text of the license may be found at > +// http://opensource.org/licenses/bsd-license.php. > +// > +// THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, > +// WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. > +// > +//------------------------------------------------------------------------------ > + > +.align 3 > +ASM_GLOBAL ASM_PFX(RiscVInvdInstCacheAsm) > +ASM_GLOBAL ASM_PFX(RiscVInvdDataCacheAsm) > + > + > +ASM_PFX(RiscVInvdInstCacheAsm): > + //fence.i > + ret > + > +ASM_PFX(RiscVInvdDataCacheAsm): > + //fence > + ret > diff --git a/MdePkg/Library/BaseLib/RiscV64/GetInterruptState.c b/MdePkg/Library/BaseLib/RiscV64/GetInterruptState.c > new file mode 100644 > index 0000000..b12450f > --- /dev/null > +++ b/MdePkg/Library/BaseLib/RiscV64/GetInterruptState.c > @@ -0,0 +1,43 @@ > +/** @file > + CPU get interrupt state function for RISC-V > + > + Copyright (c) 2016 - 2019, Hewlett Packard Enterprise Development LP. All rights reserved.
> + > + This program and the accompanying materials > + are licensed and made available under the terms and conditions of the BSD License > + which accompanies this distribution. The full text of the license may be found at > + http://opensource.org/licenses/bsd-license.php > + > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. > +**/ > + > +#include "BaseLibInternals.h" > +#include "riscv_asm.h" > +#include "riscv_encoding.h" > + > +extern UINT32 RiscVGetInterrupts (VOID); > + > +/** > + Retrieves the current CPU interrupt state. > + > + Returns TRUE is interrupts are currently enabled. Otherwise > + returns FALSE. > + > + @retval TRUE CPU interrupts are enabled. > + @retval FALSE CPU interrupts are disabled. > + > +**/ > +BOOLEAN > +EFIAPI > +GetInterruptState ( > + VOID > + ) > +{ > + unsigned long RetValue; > + > + RetValue = csr_read(CSR_SSTATUS); > + return (RetValue & MSTATUS_SIE)? TRUE: FALSE; > +} > + > + > diff --git a/MdePkg/Library/BaseLib/RiscV64/InternalSwitchStack.c b/MdePkg/Library/BaseLib/RiscV64/InternalSwitchStack.c > new file mode 100644 > index 0000000..7d748a1 > --- /dev/null > +++ b/MdePkg/Library/BaseLib/RiscV64/InternalSwitchStack.c > @@ -0,0 +1,61 @@ > +/** @file > + Switch stack function for RISC-V > + > + Copyright (c) 2016, Hewlett Packard Enterprise Development LP. All rights reserved.
> + > + This program and the accompanying materials > + are licensed and made available under the terms and conditions of the BSD License > + which accompanies this distribution. The full text of the license may be found at > + http://opensource.org/licenses/bsd-license.php > + > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. > +**/ > + > +#include "BaseLibInternals.h" > + > +/** > + Transfers control to a function starting with a new stack. > + > + Transfers control to the function specified by EntryPoint using the > + new stack specified by NewStack and passing in the parameters specified > + by Context1 and Context2. Context1 and Context2 are optional and may > + be NULL. The function EntryPoint must never return. > + Marker will be ignored on IA-32, x64, and EBC. > + IPF CPUs expect one additional parameter of type VOID * that specifies > + the new backing store pointer. > + > + If EntryPoint is NULL, then ASSERT(). > + If NewStack is NULL, then ASSERT(). > + > + @param EntryPoint A pointer to function to call with the new stack. > + @param Context1 A pointer to the context to pass into the EntryPoint > + function. > + @param Context2 A pointer to the context to pass into the EntryPoint > + function. > + @param NewStack A pointer to the new stack to use for the EntryPoint > + function. > + @param Marker VA_LIST marker for the variable argument list. 
> + > +**/ > +VOID > +EFIAPI > +InternalSwitchStack ( > + IN SWITCH_STACK_ENTRY_POINT EntryPoint, > + IN VOID *Context1, OPTIONAL > + IN VOID *Context2, OPTIONAL > + IN VOID *NewStack, > + IN VA_LIST Marker > + ) > +{ > + BASE_LIBRARY_JUMP_BUFFER JumpBuffer; > + > + DEBUG ((EFI_D_INFO, "RISC-V InternalSwitchStack Entry:%x Context1:%x Context2:%x NewStack%x\n", \ > + EntryPoint, Context1, Context2, NewStack)); > + JumpBuffer.RA = (UINTN)EntryPoint; > + JumpBuffer.SP = (UINTN)NewStack - sizeof (VOID *); > + JumpBuffer.S0 = (UINT64)(UINTN)Context1; > + JumpBuffer.S1 = (UINT64)(UINTN)Context2; > + LongJump (&JumpBuffer, (UINTN)-1); > + ASSERT(FALSE); > +} > diff --git a/MdePkg/Library/BaseLib/RiscV64/LongJump.c b/MdePkg/Library/BaseLib/RiscV64/LongJump.c > new file mode 100644 > index 0000000..bd081f2 > --- /dev/null > +++ b/MdePkg/Library/BaseLib/RiscV64/LongJump.c > @@ -0,0 +1,38 @@ > +/** @file > + Long jump implementation of RISC-V > + > + Copyright (c) 2016, Hewlett Packard Enterprise Development LP. All rights reserved.
> + > + This program and the accompanying materials > + are licensed and made available under the terms and conditions of the BSD License > + which accompanies this distribution. The full text of the license may be found at > + http://opensource.org/licenses/bsd-license.php > + > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. > +**/ > + > +#include "BaseLibInternals.h" > + > + > +/** > + Restores the CPU context that was saved with SetJump(). > + > + Restores the CPU context from the buffer specified by JumpBuffer. > + This function never returns to the caller. > + Instead is resumes execution based on the state of JumpBuffer. > + > + @param JumpBuffer A pointer to CPU context buffer. > + @param Value The value to return when the SetJump() context is restored. > + > +**/ > +VOID > +EFIAPI > +InternalLongJump ( > + IN BASE_LIBRARY_JUMP_BUFFER *JumpBuffer, > + IN UINTN Value > + ) > +{ > + ASSERT (FALSE); > +} > + > diff --git a/MdePkg/Library/BaseLib/RiscV64/RiscVCpuBreakpoint.S b/MdePkg/Library/BaseLib/RiscV64/RiscVCpuBreakpoint.S > new file mode 100644 > index 0000000..3c38e4d > --- /dev/null > +++ b/MdePkg/Library/BaseLib/RiscV64/RiscVCpuBreakpoint.S > @@ -0,0 +1,20 @@ > +//------------------------------------------------------------------------------ > +// > +// CpuBreakpoint for RISC-V > +// > +// Copyright (c) 2016, Hewlett Packard Enterprise Development LP. All rights reserved.
> +// > +// This program and the accompanying materials > +// are licensed and made available under the terms and conditions of the BSD License > +// which accompanies this distribution. The full text of the license may be found at > +// http://opensource.org/licenses/bsd-license.php. > +// > +// THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, > +// WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. > +// > +//------------------------------------------------------------------------------ > + > +ASM_GLOBAL ASM_PFX(RiscVCpuBreakpoint) > +ASM_PFX(RiscVCpuBreakpoint): > + ebreak > + ret > diff --git a/MdePkg/Library/BaseLib/RiscV64/RiscVCpuPause.S b/MdePkg/Library/BaseLib/RiscV64/RiscVCpuPause.S > new file mode 100644 > index 0000000..64b9fb5 > --- /dev/null > +++ b/MdePkg/Library/BaseLib/RiscV64/RiscVCpuPause.S > @@ -0,0 +1,20 @@ > +//------------------------------------------------------------------------------ > +// > +// CpuPause for RISC-V > +// > +// Copyright (c) 2016, Hewlett Packard Enterprise Development LP. All rights reserved.
> +// > +// This program and the accompanying materials > +// are licensed and made available under the terms and conditions of the BSD License > +// which accompanies this distribution. The full text of the license may be found at > +// http://opensource.org/licenses/bsd-license.php. > +// > +// THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, > +// WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. > +// > +//------------------------------------------------------------------------------ > + > +ASM_GLOBAL ASM_PFX(RiscVCpuPause) > +ASM_PFX(RiscVCpuPause): > + nop > + ret > diff --git a/MdePkg/Library/BaseLib/RiscV64/RiscVInterrupt.S b/MdePkg/Library/BaseLib/RiscV64/RiscVInterrupt.S > new file mode 100644 > index 0000000..5782ced > --- /dev/null > +++ b/MdePkg/Library/BaseLib/RiscV64/RiscVInterrupt.S > @@ -0,0 +1,33 @@ > +//------------------------------------------------------------------------------ > +// > +// Cpu interrupt enable/disable for RISC-V > +// > +// Copyright (c) 2016 - 2019, Hewlett Packard Enterprise Development LP. All rights reserved.
> +// > +// This program and the accompanying materials > +// are licensed and made available under the terms and conditions of the BSD License > +// which accompanies this distribution. The full text of the license may be found at > +// http://opensource.org/licenses/bsd-license.php. > +// > +// THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, > +// WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. > +// > +//------------------------------------------------------------------------------ > + > +ASM_GLOBAL ASM_PFX(RiscVDisableInterrupts) > +ASM_GLOBAL ASM_PFX(RiscVEnableInterrupt) > +ASM_GLOBAL ASM_PFX(RiscVGetInterrupts) > + > +ASM_PFX(RiscVDisableInterrupts): > + li a1, 0xaaa > + csrc 0x304, a1 > + ret > + > +ASM_PFX(RiscVEnableInterrupt): > + li a1, 0x80 > + csrs 0x304, a1 > + ret > + > +ASM_PFX(RiscVGetInterrupts): > + csrr a0, 0x304 > + ret > diff --git a/MdePkg/Library/BaseLib/RiscV64/RiscVSetJumpLongJump.S b/MdePkg/Library/BaseLib/RiscV64/RiscVSetJumpLongJump.S > new file mode 100644 > index 0000000..bd75408 > --- /dev/null > +++ b/MdePkg/Library/BaseLib/RiscV64/RiscVSetJumpLongJump.S > @@ -0,0 +1,61 @@ > +//------------------------------------------------------------------------------ > +// > +// Set/Long jump for RISC-V > +// > +// Copyright (c) 2016, Hewlett Packard Enterprise Development LP. All rights reserved.
> +// > +// This program and the accompanying materials > +// are licensed and made available under the terms and conditions of the BSD License > +// which accompanies this distribution. The full text of the license may be found at > +// http://opensource.org/licenses/bsd-license.php. > +// > +// THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, > +// WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. > +// > +//------------------------------------------------------------------------------ > +# define REG_S sd > +# define REG_L ld > +# define SZREG 8 > +.align 3 > + .globl SetJump > + > +SetJump: > + REG_S ra, 0*SZREG(a0) > + REG_S s0, 1*SZREG(a0) > + REG_S s1, 2*SZREG(a0) > + REG_S s2, 3*SZREG(a0) > + REG_S s3, 4*SZREG(a0) > + REG_S s4, 5*SZREG(a0) > + REG_S s5, 6*SZREG(a0) > + REG_S s6, 7*SZREG(a0) > + REG_S s7, 8*SZREG(a0) > + REG_S s8, 9*SZREG(a0) > + REG_S s9, 10*SZREG(a0) > + REG_S s10,11*SZREG(a0) > + REG_S s11,12*SZREG(a0) > + REG_S sp, 13*SZREG(a0) > + li a0, 0 > + ret > + > + .globl InternalLongJump > +InternalLongJump: > + REG_L ra, 0*SZREG(a0) > + REG_L s0, 1*SZREG(a0) > + REG_L s1, 2*SZREG(a0) > + REG_L s2, 3*SZREG(a0) > + REG_L s3, 4*SZREG(a0) > + REG_L s4, 5*SZREG(a0) > + REG_L s5, 6*SZREG(a0) > + REG_L s6, 7*SZREG(a0) > + REG_L s7, 8*SZREG(a0) > + REG_L s8, 9*SZREG(a0) > + REG_L s9, 10*SZREG(a0) > + REG_L s10,11*SZREG(a0) > + REG_L s11,12*SZREG(a0) > + REG_L sp, 13*SZREG(a0) > + > + add a0, s0, 0 > + add a1, s1, 0 > + add a2, s2, 0 > + add a3, s3, 0 > + ret > diff --git a/MdePkg/Library/BaseLib/RiscV64/Unaligned.c b/MdePkg/Library/BaseLib/RiscV64/Unaligned.c > new file mode 100644 > index 0000000..7068a63 > --- /dev/null > +++ b/MdePkg/Library/BaseLib/RiscV64/Unaligned.c > @@ -0,0 +1,270 @@ > +/** @file > + RISC-V specific functionality for (un)aligned memory read/write. > + > + Copyright (c) 2016, Hewlett Packard Enterprise Development LP. All rights reserved.
> + > + This program and the accompanying materials > + are licensed and made available under the terms and conditions of the BSD License > + which accompanies this distribution. The full text of the license may be found at > + http://opensource.org/licenses/bsd-license.php > + > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. > +**/ > + > +#include "BaseLibInternals.h" > + > +/** > + Reads a 16-bit value from memory that may be unaligned. > + > + This function returns the 16-bit value pointed to by Buffer. The function > + guarantees that the read operation does not produce an alignment fault. > + > + If the Buffer is NULL, then ASSERT(). > + > + @param Buffer A pointer to a 16-bit value that may be unaligned. > + > + @return The 16-bit value read from Buffer. > + > +**/ > +UINT16 > +EFIAPI > +ReadUnaligned16 ( > + IN CONST UINT16 *Buffer > + ) > +{ > + UINT16 Value; > + INT8 Count; > + > + ASSERT (Buffer != NULL); > + > + for (Count = sizeof (UINT16) - 1, Value = 0; Count >= 0 ; Count --) { > + Value = Value << 8; > + Value |= *((UINT8*)Buffer + Count); > + } > + return Value; > +} > + > +/** > + Writes a 16-bit value to memory that may be unaligned. > + > + This function writes the 16-bit value specified by Value to Buffer. Value is > + returned. The function guarantees that the write operation does not produce > + an alignment fault. > + > + If the Buffer is NULL, then ASSERT(). > + > + @param Buffer A pointer to a 16-bit value that may be unaligned. > + @param Value 16-bit value to write to Buffer. > + > + @return The 16-bit value to write to Buffer. 
> + > +**/ > +UINT16 > +EFIAPI > +WriteUnaligned16 ( > + OUT UINT16 *Buffer, > + IN UINT16 Value > + ) > +{ > + INT8 Count; > + UINT16 ValueTemp; > + > + ASSERT (Buffer != NULL); > + > + for (Count = 0, ValueTemp = Value; Count < sizeof (UINT16) ; Count ++) { > + *((UINT8*)Buffer + Count) = (UINT8)(ValueTemp & 0xff); > + ValueTemp = ValueTemp >> 8; > + } > + return Value; > +} > + > +/** > + Reads a 24-bit value from memory that may be unaligned. > + > + This function returns the 24-bit value pointed to by Buffer. The function > + guarantees that the read operation does not produce an alignment fault. > + > + If the Buffer is NULL, then ASSERT(). > + > + @param Buffer A pointer to a 24-bit value that may be unaligned. > + > + @return The 24-bit value read from Buffer. > + > +**/ > +UINT32 > +EFIAPI > +ReadUnaligned24 ( > + IN CONST UINT32 *Buffer > + ) > +{ > + UINT32 Value; > + INT8 Count; > + > + ASSERT (Buffer != NULL); > + for (Count = 2, Value = 0; Count >= 0 ; Count --) { > + Value = Value << 8; > + Value |= *((UINT8*)Buffer + Count); > + } > + return Value; > +} > + > +/** > + Writes a 24-bit value to memory that may be unaligned. > + > + This function writes the 24-bit value specified by Value to Buffer. Value is > + returned. The function guarantees that the write operation does not produce > + an alignment fault. > + > + If the Buffer is NULL, then ASSERT(). > + > + @param Buffer A pointer to a 24-bit value that may be unaligned. > + @param Value 24-bit value to write to Buffer. > + > + @return The 24-bit value to write to Buffer. > + > +**/ > +UINT32 > +EFIAPI > +WriteUnaligned24 ( > + OUT UINT32 *Buffer, > + IN UINT32 Value > + ) > +{ > + INT8 Count; > + UINT32 ValueTemp; > + > + ASSERT (Buffer != NULL); > + for (Count = 0, ValueTemp = Value; Count < 3 ; Count ++) { > + *((UINT8*)Buffer + Count) = (UINT8)(ValueTemp & 0xff); > + ValueTemp = ValueTemp >> 8; > + } > + return Value; > +} > + > +/** > + Reads a 32-bit value from memory that may be unaligned. 
> + > + This function returns the 32-bit value pointed to by Buffer. The function > + guarantees that the read operation does not produce an alignment fault. > + > + If the Buffer is NULL, then ASSERT(). > + > + @param Buffer A pointer to a 32-bit value that may be unaligned. > + > + @return The 32-bit value read from Buffer. > + > +**/ > +UINT32 > +EFIAPI > +ReadUnaligned32 ( > + IN CONST UINT32 *Buffer > + ) > +{ > + UINT32 Value; > + INT8 Count; > + > + ASSERT (Buffer != NULL); > + > + for (Count = sizeof (UINT32) - 1, Value = 0; Count >= 0 ; Count --) { > + Value = Value << 8; > + Value |= *((UINT8*)Buffer + Count); > + } > + return Value; > +} > + > +/** > + Writes a 32-bit value to memory that may be unaligned. > + > + This function writes the 32-bit value specified by Value to Buffer. Value is > + returned. The function guarantees that the write operation does not produce > + an alignment fault. > + > + If the Buffer is NULL, then ASSERT(). > + > + @param Buffer A pointer to a 32-bit value that may be unaligned. > + @param Value The 32-bit value to write to Buffer. > + > + @return The 32-bit value to write to Buffer. > + > +**/ > +UINT32 > +EFIAPI > +WriteUnaligned32 ( > + OUT UINT32 *Buffer, > + IN UINT32 Value > + ) > +{ > + INT8 Count; > + UINT32 ValueTemp; > + > + ASSERT (Buffer != NULL); > + for (Count = 0, ValueTemp = Value; Count < sizeof (UINT32) ; Count ++) { > + *((UINT8*)Buffer + Count) = (UINT8)(ValueTemp & 0xff); > + ValueTemp = ValueTemp >> 8; > + } > + return Value; > +} > + > +/** > + Reads a 64-bit value from memory that may be unaligned. > + > + This function returns the 64-bit value pointed to by Buffer. The function > + guarantees that the read operation does not produce an alignment fault. > + > + If the Buffer is NULL, then ASSERT(). > + > + @param Buffer A pointer to a 64-bit value that may be unaligned. > + > + @return The 64-bit value read from Buffer. 
> + > +**/ > +UINT64 > +EFIAPI > +ReadUnaligned64 ( > + IN CONST UINT64 *Buffer > + ) > +{ > + UINT64 Value; > + INT8 Count; > + > + ASSERT (Buffer != NULL); > + for (Count = sizeof (UINT64) - 1, Value = 0; Count >= 0 ; Count --) { > + Value = Value << 8; > + Value |= *((UINT8*)Buffer + Count); > + } > + return Value; > +} > + > +/** > + Writes a 64-bit value to memory that may be unaligned. > + > + This function writes the 64-bit value specified by Value to Buffer. Value is > + returned. The function guarantees that the write operation does not produce > + an alignment fault. > + > + If the Buffer is NULL, then ASSERT(). > + > + @param Buffer A pointer to a 64-bit value that may be unaligned. > + @param Value The 64-bit value to write to Buffer. > + > + @return The 64-bit value to write to Buffer. > + > +**/ > +UINT64 > +EFIAPI > +WriteUnaligned64 ( > + OUT UINT64 *Buffer, > + IN UINT64 Value > + ) > +{ > + INT8 Count; > + UINT64 ValueTemp; > + > + ASSERT (Buffer != NULL); > + for (Count = 0, ValueTemp = Value; Count < sizeof (UINT64) ; Count ++) { > + *((UINT8*)Buffer + Count) = (UINT8)(ValueTemp & 0xff); > + ValueTemp = ValueTemp >> 8; > + } > + return Value; > +} > diff --git a/MdePkg/Library/BaseLib/RiscV64/riscv_asm.h b/MdePkg/Library/BaseLib/RiscV64/riscv_asm.h > new file mode 100644 > index 0000000..b050742 > --- /dev/null > +++ b/MdePkg/Library/BaseLib/RiscV64/riscv_asm.h > @@ -0,0 +1,194 @@ > +/** @file > + Macro definitions of RISC-V CSR assembly. > + > + Copyright (c) 2019, Hewlett Packard Enterprise Development LP. All rights reserved.
> + > + This program and the accompanying materials > + are licensed and made available under the terms and conditions of the BSD License > + which accompanies this distribution. The full text of the license may be found at > + http://opensource.org/licenses/bsd-license.php > + > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. > + > + SPDX-License-Identifier: BSD-2-Clause > + > + Copyright (c) 2019 Western Digital Corporation or its affiliates. > + > +**/ > + > +#ifndef __RISCV_ASM_H__ > +#define __RISCV_ASM_H__ > + > +#include "riscv_encoding.h" > + > +#ifdef __ASSEMBLY__ > +#define __ASM_STR(x) x > +#else > +#define __ASM_STR(x) #x > +#endif > + > +#if __riscv_xlen == 64 > +#define __REG_SEL(a, b) __ASM_STR(a) > +#elif __riscv_xlen == 32 > +#define __REG_SEL(a, b) __ASM_STR(b) > +#else > +#error "Unexpected __riscv_xlen" > +#endif > + > +#define PAGE_SHIFT (12) > +#define PAGE_SIZE (_AC(1, UL) << PAGE_SHIFT) > +#define PAGE_MASK (~(PAGE_SIZE - 1)) > +#define SBI_TLB_FLUSH_ALL ((unsigned long)-1) > + > +#define REG_L __REG_SEL(ld, lw) > +#define REG_S __REG_SEL(sd, sw) > +#define SZREG __REG_SEL(8, 4) > +#define LGREG __REG_SEL(3, 2) > + > +#if __SIZEOF_POINTER__ == 8 > +#define BITS_PER_LONG 64 > +#ifdef __ASSEMBLY__ > +#define RISCV_PTR .dword > +#define RISCV_SZPTR 8 > +#define RISCV_LGPTR 3 > +#else > +#define RISCV_PTR ".dword" > +#define RISCV_SZPTR "8" > +#define RISCV_LGPTR "3" > +#endif > +#elif __SIZEOF_POINTER__ == 4 > +#define BITS_PER_LONG 32 > +#ifdef __ASSEMBLY__ > +#define RISCV_PTR .word > +#define RISCV_SZPTR 4 > +#define RISCV_LGPTR 2 > +#else > +#define RISCV_PTR ".word" > +#define RISCV_SZPTR "4" > +#define RISCV_LGPTR "2" > +#endif > +#else > +#error "Unexpected __SIZEOF_POINTER__" > +#endif > + > +#if (__SIZEOF_INT__ == 4) > +#define RISCV_INT __ASM_STR(.word) > +#define RISCV_SZINT __ASM_STR(4) > +#define RISCV_LGINT __ASM_STR(2) > 
+#else > +#error "Unexpected __SIZEOF_INT__" > +#endif > + > +#if (__SIZEOF_SHORT__ == 2) > +#define RISCV_SHORT __ASM_STR(.half) > +#define RISCV_SZSHORT __ASM_STR(2) > +#define RISCV_LGSHORT __ASM_STR(1) > +#else > +#error "Unexpected __SIZEOF_SHORT__" > +#endif > + > +#ifndef __ASSEMBLY__ > + > +#define csr_swap(csr, val) \ > +({ \ > + unsigned long __v = (unsigned long)(val); \ > + __asm__ __volatile__ ("csrrw %0, " __ASM_STR(csr) ", %1"\ > + : "=r" (__v) : "rK" (__v) \ > + : "memory"); \ > + __v; \ > +}) > + > +#define csr_read(csr) \ > +({ \ > + register unsigned long __v; \ > + __asm__ __volatile__ ("csrr %0, " __ASM_STR(csr) \ > + : "=r" (__v) : \ > + : "memory"); \ > + __v; \ > +}) > + > +#define csr_write(csr, val) \ > +({ \ > + unsigned long __v = (unsigned long)(val); \ > + __asm__ __volatile__ ("csrw " __ASM_STR(csr) ", %0" \ > + : : "rK" (__v) \ > + : "memory"); \ > +}) > + > +#define csr_read_set(csr, val) \ > +({ \ > + unsigned long __v = (unsigned long)(val); \ > + __asm__ __volatile__ ("csrrs %0, " __ASM_STR(csr) ", %1"\ > + : "=r" (__v) : "rK" (__v) \ > + : "memory"); \ > + __v; \ > +}) > + > +#define csr_set(csr, val) \ > +({ \ > + unsigned long __v = (unsigned long)(val); \ > + __asm__ __volatile__ ("csrs " __ASM_STR(csr) ", %0" \ > + : : "rK" (__v) \ > + : "memory"); \ > +}) > + > +#define csr_read_clear(csr, val) \ > +({ \ > + unsigned long __v = (unsigned long)(val); \ > + __asm__ __volatile__ ("csrrc %0, " __ASM_STR(csr) ", %1"\ > + : "=r" (__v) : "rK" (__v) \ > + : "memory"); \ > + __v; \ > +}) > + > +#define csr_clear(csr, val) \ > +({ \ > + unsigned long __v = (unsigned long)(val); \ > + __asm__ __volatile__ ("csrc " __ASM_STR(csr) ", %0" \ > + : : "rK" (__v) \ > + : "memory"); \ > +}) > + > +unsigned long csr_read_num(int csr_num); > + > +void csr_write_num(int csr_num, unsigned long val); > + > +#define wfi() \ > +do { \ > + __asm__ __volatile__ ("wfi" ::: "memory"); \ > +} while (0) > + > +static inline int misa_extension(char ext) > 
+{
> +  return csr_read(CSR_MISA) & (1 << (ext - 'A'));
> +}
> +
> +static inline int misa_xlen(void)
> +{
> +  return ((long)csr_read(CSR_MISA) < 0) ? 64 : 32;
> +}
> +
> +static inline void misa_string(char *out, unsigned int out_sz)
> +{
> +  unsigned long val = csr_read(CSR_MISA);
> +  unsigned int i, pos = 0;
> +
> +  for (i = 0; i < 26 && (pos + 1) < out_sz; i++) {
> +    if (val & (1UL << i)) {
> +      out[pos++] = 'A' + i;
> +    }
> +  }
> +  if (out_sz != 0) {
> +    out[pos] = '\0';
> +  }
> +}
> +
> +int pmp_set(unsigned int n, unsigned long prot,
> +     unsigned long addr, unsigned long log2len);
> +
> +int pmp_get(unsigned int n, unsigned long *prot_out,
> +     unsigned long *addr_out, unsigned long *log2len_out);
> +
> +#endif /* !__ASSEMBLY__ */
> +
> +#endif
> diff --git a/MdePkg/Library/BaseLib/RiscV64/riscv_encoding.h b/MdePkg/Library/BaseLib/RiscV64/riscv_encoding.h
> new file mode 100644
> index 0000000..6f5fefd
> --- /dev/null
> +++ b/MdePkg/Library/BaseLib/RiscV64/riscv_encoding.h
> @@ -0,0 +1,574 @@
> +/** @file
> +  Definitions of RISC-V CSR.
> +
> +  Copyright (c) 2019, Hewlett Packard Enterprise Development LP. All rights reserved.
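For reference, the misa decode above relies on two conventions: bit N of the misa
CSR set means extension ('A' + N) is implemented, and the MXL field occupies the top
bits, so on RV64 the value reads as negative when treated as a signed long. A
host-side sketch that decodes a misa *value* instead of reading the CSR; the names
misa_has_ext/misa_to_string are mine:

```c
#include <assert.h>
#include <string.h>

/* Bit (ext - 'A') of misa indicates whether extension `ext` is present. */
static int misa_has_ext(unsigned long misa, char ext)
{
  return (int)((misa >> (ext - 'A')) & 1);
}

/* Emit the extension letters A..Z that are set, in order; out must
   hold at least 27 bytes (26 letters plus the terminator). */
static void misa_to_string(unsigned long misa, char *out)
{
  unsigned i;

  for (i = 0; i < 26; i++) {
    if (misa & (1UL << i)) {
      *out++ = (char)('A' + i);
    }
  }
  *out = '\0';
}
```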
> + > + This program and the accompanying materials > + are licensed and made available under the terms and conditions of the BSD License > + which accompanies this distribution. The full text of the license may be found at > + http://opensource.org/licenses/bsd-license.php > + > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. > + > + SPDX-License-Identifier: BSD-2-Clause > + > + Copyright (c) 2019 Western Digital Corporation or its affiliates. > + > +**/ > + > +#ifndef __RISCV_ENCODING_H__ > +#define __RISCV_ENCODING_H__ > + > +#include "sbi_const.h" > + > +/* TODO: Make constants usable in assembly with _AC() macro */ > + > +#define MSTATUS_UIE 0x00000001 > +#define MSTATUS_SIE 0x00000002 > +#define MSTATUS_HIE 0x00000004 > +#define MSTATUS_MIE 0x00000008 > +#define MSTATUS_UPIE 0x00000010 > +#define MSTATUS_SPIE_SHIFT 5 > +#define MSTATUS_SPIE (1UL << MSTATUS_SPIE_SHIFT) > +#define MSTATUS_HPIE 0x00000040 > +#define MSTATUS_MPIE 0x00000080 > +#define MSTATUS_SPP_SHIFT 8 > +#define MSTATUS_SPP (1UL << MSTATUS_SPP_SHIFT) > +#define MSTATUS_HPP 0x00000600 > +#define MSTATUS_MPP_SHIFT 11 > +#define MSTATUS_MPP (3UL << MSTATUS_MPP_SHIFT) > +#define MSTATUS_FS 0x00006000 > +#define MSTATUS_XS 0x00018000 > +#define MSTATUS_MPRV 0x00020000 > +#define MSTATUS_SUM 0x00040000 > +#define MSTATUS_MXR 0x00080000 > +#define MSTATUS_TVM 0x00100000 > +#define MSTATUS_TW 0x00200000 > +#define MSTATUS_TSR 0x00400000 > +#define MSTATUS32_SD 0x80000000 > +#define MSTATUS_UXL 0x0000000300000000 > +#define MSTATUS_SXL 0x0000000C00000000 > +#define MSTATUS64_SD 0x8000000000000000 > + > +#define SSTATUS_UIE 0x00000001 > +#define SSTATUS_SIE 0x00000002 > +#define SSTATUS_UPIE 0x00000010 > +#define SSTATUS_SPIE 0x00000020 > +#define SSTATUS_SPP 0x00000100 > +#define SSTATUS_FS 0x00006000 > +#define SSTATUS_XS 0x00018000 > +#define SSTATUS_SUM 0x00040000 > +#define SSTATUS_MXR 
0x00080000 > +#define SSTATUS32_SD 0x80000000 > +#define SSTATUS_UXL 0x0000000300000000 > +#define SSTATUS64_SD 0x8000000000000000 > + > +#define DCSR_XDEBUGVER (3U<<30) > +#define DCSR_NDRESET (1<<29) > +#define DCSR_FULLRESET (1<<28) > +#define DCSR_EBREAKM (1<<15) > +#define DCSR_EBREAKH (1<<14) > +#define DCSR_EBREAKS (1<<13) > +#define DCSR_EBREAKU (1<<12) > +#define DCSR_STOPCYCLE (1<<10) > +#define DCSR_STOPTIME (1<<9) > +#define DCSR_CAUSE (7<<6) > +#define DCSR_DEBUGINT (1<<5) > +#define DCSR_HALT (1<<3) > +#define DCSR_STEP (1<<2) > +#define DCSR_PRV (3<<0) > + > +#define DCSR_CAUSE_NONE 0 > +#define DCSR_CAUSE_SWBP 1 > +#define DCSR_CAUSE_HWBP 2 > +#define DCSR_CAUSE_DEBUGINT 3 > +#define DCSR_CAUSE_STEP 4 > +#define DCSR_CAUSE_HALT 5 > + > +#define MCONTROL_TYPE(xlen) (0xfULL<<((xlen)-4)) > +#define MCONTROL_DMODE(xlen) (1ULL<<((xlen)-5)) > +#define MCONTROL_MASKMAX(xlen) (0x3fULL<<((xlen)-11)) > + > +#define MCONTROL_SELECT (1<<19) > +#define MCONTROL_TIMING (1<<18) > +#define MCONTROL_ACTION (0x3f<<12) > +#define MCONTROL_CHAIN (1<<11) > +#define MCONTROL_MATCH (0xf<<7) > +#define MCONTROL_M (1<<6) > +#define MCONTROL_H (1<<5) > +#define MCONTROL_S (1<<4) > +#define MCONTROL_U (1<<3) > +#define MCONTROL_EXECUTE (1<<2) > +#define MCONTROL_STORE (1<<1) > +#define MCONTROL_LOAD (1<<0) > + > +#define MCONTROL_TYPE_NONE 0 > +#define MCONTROL_TYPE_MATCH 2 > + > +#define MCONTROL_ACTION_DEBUG_EXCEPTION 0 > +#define MCONTROL_ACTION_DEBUG_MODE 1 > +#define MCONTROL_ACTION_TRACE_START 2 > +#define MCONTROL_ACTION_TRACE_STOP 3 > +#define MCONTROL_ACTION_TRACE_EMIT 4 > + > +#define MCONTROL_MATCH_EQUAL 0 > +#define MCONTROL_MATCH_NAPOT 1 > +#define MCONTROL_MATCH_GE 2 > +#define MCONTROL_MATCH_LT 3 > +#define MCONTROL_MATCH_MASK_LOW 4 > +#define MCONTROL_MATCH_MASK_HIGH 5 > + > +#define IRQ_S_SOFT 1 > +#define IRQ_H_SOFT 2 > +#define IRQ_M_SOFT 3 > +#define IRQ_S_TIMER 5 > +#define IRQ_H_TIMER 6 > +#define IRQ_M_TIMER 7 > +#define IRQ_S_EXT 9 > +#define IRQ_H_EXT 
10 > +#define IRQ_M_EXT 11 > +#define IRQ_COP 12 > +#define IRQ_HOST 13 > + > +#define MIP_SSIP (1 << IRQ_S_SOFT) > +#define MIP_HSIP (1 << IRQ_H_SOFT) > +#define MIP_MSIP (1 << IRQ_M_SOFT) > +#define MIP_STIP (1 << IRQ_S_TIMER) > +#define MIP_HTIP (1 << IRQ_H_TIMER) > +#define MIP_MTIP (1 << IRQ_M_TIMER) > +#define MIP_SEIP (1 << IRQ_S_EXT) > +#define MIP_HEIP (1 << IRQ_H_EXT) > +#define MIP_MEIP (1 << IRQ_M_EXT) > + > +#define SIP_SSIP MIP_SSIP > +#define SIP_STIP MIP_STIP > + > +#define PRV_U 0 > +#define PRV_S 1 > +#define PRV_H 2 > +#define PRV_M 3 > + > +#define SATP32_MODE 0x80000000 > +#define SATP32_ASID 0x7FC00000 > +#define SATP32_PPN 0x003FFFFF > +#define SATP64_MODE 0xF000000000000000 > +#define SATP64_ASID 0x0FFFF00000000000 > +#define SATP64_PPN 0x00000FFFFFFFFFFF > + > +#define SATP_MODE_OFF 0 > +#define SATP_MODE_SV32 1 > +#define SATP_MODE_SV39 8 > +#define SATP_MODE_SV48 9 > +#define SATP_MODE_SV57 10 > +#define SATP_MODE_SV64 11 > + > +#define PMP_R 0x01 > +#define PMP_W 0x02 > +#define PMP_X 0x04 > +#define PMP_A 0x18 > +#define PMP_A_TOR 0x08 > +#define PMP_A_NA4 0x10 > +#define PMP_A_NAPOT 0x18 > +#define PMP_L 0x80 > + > +#define PMP_SHIFT 2 > +#define PMP_COUNT 16 > + > +/* page table entry (PTE) fields */ > +#define PTE_V 0x001 /* Valid */ > +#define PTE_R 0x002 /* Read */ > +#define PTE_W 0x004 /* Write */ > +#define PTE_X 0x008 /* Execute */ > +#define PTE_U 0x010 /* User */ > +#define PTE_G 0x020 /* Global */ > +#define PTE_A 0x040 /* Accessed */ > +#define PTE_D 0x080 /* Dirty */ > +#define PTE_SOFT 0x300 /* Reserved for Software */ > + > +#define PTE_PPN_SHIFT 10 > + > +#define PTE_TABLE(PTE) \ > + (((PTE) & (PTE_V | PTE_R | PTE_W | PTE_X)) == PTE_V) > + > +#if __riscv_xlen == 64 > +#define MSTATUS_SD MSTATUS64_SD > +#define SSTATUS_SD SSTATUS64_SD > +#define RISCV_PGLEVEL_BITS 9 > +#define SATP_MODE SATP64_MODE > +#else > +#define MSTATUS_SD MSTATUS32_SD > +#define SSTATUS_SD SSTATUS32_SD > +#define RISCV_PGLEVEL_BITS 10 > +#define 
SATP_MODE SATP32_MODE > +#endif > +#define RISCV_PGSHIFT 12 > +#define RISCV_PGSIZE (1 << RISCV_PGSHIFT) > + > +#define CSR_USTATUS 0x0 > +#define CSR_FFLAGS 0x1 > +#define CSR_FRM 0x2 > +#define CSR_FCSR 0x3 > +#define CSR_CYCLE 0xc00 > +#define CSR_UIE 0x4 > +#define CSR_UTVEC 0x5 > +#define CSR_USCRATCH 0x40 > +#define CSR_UEPC 0x41 > +#define CSR_UCAUSE 0x42 > +#define CSR_UTVAL 0x43 > +#define CSR_UIP 0x44 > +#define CSR_TIME 0xc01 > +#define CSR_INSTRET 0xc02 > +#define CSR_HPMCOUNTER3 0xc03 > +#define CSR_HPMCOUNTER4 0xc04 > +#define CSR_HPMCOUNTER5 0xc05 > +#define CSR_HPMCOUNTER6 0xc06 > +#define CSR_HPMCOUNTER7 0xc07 > +#define CSR_HPMCOUNTER8 0xc08 > +#define CSR_HPMCOUNTER9 0xc09 > +#define CSR_HPMCOUNTER10 0xc0a > +#define CSR_HPMCOUNTER11 0xc0b > +#define CSR_HPMCOUNTER12 0xc0c > +#define CSR_HPMCOUNTER13 0xc0d > +#define CSR_HPMCOUNTER14 0xc0e > +#define CSR_HPMCOUNTER15 0xc0f > +#define CSR_HPMCOUNTER16 0xc10 > +#define CSR_HPMCOUNTER17 0xc11 > +#define CSR_HPMCOUNTER18 0xc12 > +#define CSR_HPMCOUNTER19 0xc13 > +#define CSR_HPMCOUNTER20 0xc14 > +#define CSR_HPMCOUNTER21 0xc15 > +#define CSR_HPMCOUNTER22 0xc16 > +#define CSR_HPMCOUNTER23 0xc17 > +#define CSR_HPMCOUNTER24 0xc18 > +#define CSR_HPMCOUNTER25 0xc19 > +#define CSR_HPMCOUNTER26 0xc1a > +#define CSR_HPMCOUNTER27 0xc1b > +#define CSR_HPMCOUNTER28 0xc1c > +#define CSR_HPMCOUNTER29 0xc1d > +#define CSR_HPMCOUNTER30 0xc1e > +#define CSR_HPMCOUNTER31 0xc1f > +#define CSR_SSTATUS 0x100 > +#define CSR_SIE 0x104 > +#define CSR_STVEC 0x105 > +#define CSR_SCOUNTEREN 0x106 > +#define CSR_SSCRATCH 0x140 > +#define CSR_SEPC 0x141 > +#define CSR_SCAUSE 0x142 > +#define CSR_STVAL 0x143 > +#define CSR_SIP 0x144 > +#define CSR_SATP 0x180 > +#define CSR_MSTATUS 0x300 > +#define CSR_MISA 0x301 > +#define CSR_MEDELEG 0x302 > +#define CSR_MIDELEG 0x303 > +#define CSR_MIE 0x304 > +#define CSR_MTVEC 0x305 > +#define CSR_MCOUNTEREN 0x306 > +#define CSR_MSCRATCH 0x340 > +#define CSR_MEPC 0x341 > +#define CSR_MCAUSE 
0x342 > +#define CSR_MTVAL 0x343 > +#define CSR_MIP 0x344 > +#define CSR_PMPCFG0 0x3a0 > +#define CSR_PMPCFG1 0x3a1 > +#define CSR_PMPCFG2 0x3a2 > +#define CSR_PMPCFG3 0x3a3 > +#define CSR_PMPADDR0 0x3b0 > +#define CSR_PMPADDR1 0x3b1 > +#define CSR_PMPADDR2 0x3b2 > +#define CSR_PMPADDR3 0x3b3 > +#define CSR_PMPADDR4 0x3b4 > +#define CSR_PMPADDR5 0x3b5 > +#define CSR_PMPADDR6 0x3b6 > +#define CSR_PMPADDR7 0x3b7 > +#define CSR_PMPADDR8 0x3b8 > +#define CSR_PMPADDR9 0x3b9 > +#define CSR_PMPADDR10 0x3ba > +#define CSR_PMPADDR11 0x3bb > +#define CSR_PMPADDR12 0x3bc > +#define CSR_PMPADDR13 0x3bd > +#define CSR_PMPADDR14 0x3be > +#define CSR_PMPADDR15 0x3bf > +#define CSR_TSELECT 0x7a0 > +#define CSR_TDATA1 0x7a1 > +#define CSR_TDATA2 0x7a2 > +#define CSR_TDATA3 0x7a3 > +#define CSR_DCSR 0x7b0 > +#define CSR_DPC 0x7b1 > +#define CSR_DSCRATCH 0x7b2 > +#define CSR_MCYCLE 0xb00 > +#define CSR_MINSTRET 0xb02 > +#define CSR_MHPMCOUNTER3 0xb03 > +#define CSR_MHPMCOUNTER4 0xb04 > +#define CSR_MHPMCOUNTER5 0xb05 > +#define CSR_MHPMCOUNTER6 0xb06 > +#define CSR_MHPMCOUNTER7 0xb07 > +#define CSR_MHPMCOUNTER8 0xb08 > +#define CSR_MHPMCOUNTER9 0xb09 > +#define CSR_MHPMCOUNTER10 0xb0a > +#define CSR_MHPMCOUNTER11 0xb0b > +#define CSR_MHPMCOUNTER12 0xb0c > +#define CSR_MHPMCOUNTER13 0xb0d > +#define CSR_MHPMCOUNTER14 0xb0e > +#define CSR_MHPMCOUNTER15 0xb0f > +#define CSR_MHPMCOUNTER16 0xb10 > +#define CSR_MHPMCOUNTER17 0xb11 > +#define CSR_MHPMCOUNTER18 0xb12 > +#define CSR_MHPMCOUNTER19 0xb13 > +#define CSR_MHPMCOUNTER20 0xb14 > +#define CSR_MHPMCOUNTER21 0xb15 > +#define CSR_MHPMCOUNTER22 0xb16 > +#define CSR_MHPMCOUNTER23 0xb17 > +#define CSR_MHPMCOUNTER24 0xb18 > +#define CSR_MHPMCOUNTER25 0xb19 > +#define CSR_MHPMCOUNTER26 0xb1a > +#define CSR_MHPMCOUNTER27 0xb1b > +#define CSR_MHPMCOUNTER28 0xb1c > +#define CSR_MHPMCOUNTER29 0xb1d > +#define CSR_MHPMCOUNTER30 0xb1e > +#define CSR_MHPMCOUNTER31 0xb1f > +#define CSR_MHPMEVENT3 0x323 > +#define CSR_MHPMEVENT4 0x324 > +#define 
CSR_MHPMEVENT5 0x325 > +#define CSR_MHPMEVENT6 0x326 > +#define CSR_MHPMEVENT7 0x327 > +#define CSR_MHPMEVENT8 0x328 > +#define CSR_MHPMEVENT9 0x329 > +#define CSR_MHPMEVENT10 0x32a > +#define CSR_MHPMEVENT11 0x32b > +#define CSR_MHPMEVENT12 0x32c > +#define CSR_MHPMEVENT13 0x32d > +#define CSR_MHPMEVENT14 0x32e > +#define CSR_MHPMEVENT15 0x32f > +#define CSR_MHPMEVENT16 0x330 > +#define CSR_MHPMEVENT17 0x331 > +#define CSR_MHPMEVENT18 0x332 > +#define CSR_MHPMEVENT19 0x333 > +#define CSR_MHPMEVENT20 0x334 > +#define CSR_MHPMEVENT21 0x335 > +#define CSR_MHPMEVENT22 0x336 > +#define CSR_MHPMEVENT23 0x337 > +#define CSR_MHPMEVENT24 0x338 > +#define CSR_MHPMEVENT25 0x339 > +#define CSR_MHPMEVENT26 0x33a > +#define CSR_MHPMEVENT27 0x33b > +#define CSR_MHPMEVENT28 0x33c > +#define CSR_MHPMEVENT29 0x33d > +#define CSR_MHPMEVENT30 0x33e > +#define CSR_MHPMEVENT31 0x33f > +#define CSR_MVENDORID 0xf11 > +#define CSR_MARCHID 0xf12 > +#define CSR_MIMPID 0xf13 > +#define CSR_MHARTID 0xf14 > +#define CSR_CYCLEH 0xc80 > +#define CSR_TIMEH 0xc81 > +#define CSR_INSTRETH 0xc82 > +#define CSR_HPMCOUNTER3H 0xc83 > +#define CSR_HPMCOUNTER4H 0xc84 > +#define CSR_HPMCOUNTER5H 0xc85 > +#define CSR_HPMCOUNTER6H 0xc86 > +#define CSR_HPMCOUNTER7H 0xc87 > +#define CSR_HPMCOUNTER8H 0xc88 > +#define CSR_HPMCOUNTER9H 0xc89 > +#define CSR_HPMCOUNTER10H 0xc8a > +#define CSR_HPMCOUNTER11H 0xc8b > +#define CSR_HPMCOUNTER12H 0xc8c > +#define CSR_HPMCOUNTER13H 0xc8d > +#define CSR_HPMCOUNTER14H 0xc8e > +#define CSR_HPMCOUNTER15H 0xc8f > +#define CSR_HPMCOUNTER16H 0xc90 > +#define CSR_HPMCOUNTER17H 0xc91 > +#define CSR_HPMCOUNTER18H 0xc92 > +#define CSR_HPMCOUNTER19H 0xc93 > +#define CSR_HPMCOUNTER20H 0xc94 > +#define CSR_HPMCOUNTER21H 0xc95 > +#define CSR_HPMCOUNTER22H 0xc96 > +#define CSR_HPMCOUNTER23H 0xc97 > +#define CSR_HPMCOUNTER24H 0xc98 > +#define CSR_HPMCOUNTER25H 0xc99 > +#define CSR_HPMCOUNTER26H 0xc9a > +#define CSR_HPMCOUNTER27H 0xc9b > +#define CSR_HPMCOUNTER28H 0xc9c > +#define 
CSR_HPMCOUNTER29H 0xc9d > +#define CSR_HPMCOUNTER30H 0xc9e > +#define CSR_HPMCOUNTER31H 0xc9f > +#define CSR_MCYCLEH 0xb80 > +#define CSR_MINSTRETH 0xb82 > +#define CSR_MHPMCOUNTER3H 0xb83 > +#define CSR_MHPMCOUNTER4H 0xb84 > +#define CSR_MHPMCOUNTER5H 0xb85 > +#define CSR_MHPMCOUNTER6H 0xb86 > +#define CSR_MHPMCOUNTER7H 0xb87 > +#define CSR_MHPMCOUNTER8H 0xb88 > +#define CSR_MHPMCOUNTER9H 0xb89 > +#define CSR_MHPMCOUNTER10H 0xb8a > +#define CSR_MHPMCOUNTER11H 0xb8b > +#define CSR_MHPMCOUNTER12H 0xb8c > +#define CSR_MHPMCOUNTER13H 0xb8d > +#define CSR_MHPMCOUNTER14H 0xb8e > +#define CSR_MHPMCOUNTER15H 0xb8f > +#define CSR_MHPMCOUNTER16H 0xb90 > +#define CSR_MHPMCOUNTER17H 0xb91 > +#define CSR_MHPMCOUNTER18H 0xb92 > +#define CSR_MHPMCOUNTER19H 0xb93 > +#define CSR_MHPMCOUNTER20H 0xb94 > +#define CSR_MHPMCOUNTER21H 0xb95 > +#define CSR_MHPMCOUNTER22H 0xb96 > +#define CSR_MHPMCOUNTER23H 0xb97 > +#define CSR_MHPMCOUNTER24H 0xb98 > +#define CSR_MHPMCOUNTER25H 0xb99 > +#define CSR_MHPMCOUNTER26H 0xb9a > +#define CSR_MHPMCOUNTER27H 0xb9b > +#define CSR_MHPMCOUNTER28H 0xb9c > +#define CSR_MHPMCOUNTER29H 0xb9d > +#define CSR_MHPMCOUNTER30H 0xb9e > +#define CSR_MHPMCOUNTER31H 0xb9f > + > +#define CAUSE_MISALIGNED_FETCH 0x0 > +#define CAUSE_FETCH_ACCESS 0x1 > +#define CAUSE_ILLEGAL_INSTRUCTION 0x2 > +#define CAUSE_BREAKPOINT 0x3 > +#define CAUSE_MISALIGNED_LOAD 0x4 > +#define CAUSE_LOAD_ACCESS 0x5 > +#define CAUSE_MISALIGNED_STORE 0x6 > +#define CAUSE_STORE_ACCESS 0x7 > +#define CAUSE_USER_ECALL 0x8 > +#define CAUSE_HYPERVISOR_ECALL 0x9 > +#define CAUSE_SUPERVISOR_ECALL 0xa > +#define CAUSE_MACHINE_ECALL 0xb > +#define CAUSE_FETCH_PAGE_FAULT 0xc > +#define CAUSE_LOAD_PAGE_FAULT 0xd > +#define CAUSE_STORE_PAGE_FAULT 0xf > + > +#define INSN_MATCH_LB 0x3 > +#define INSN_MASK_LB 0x707f > +#define INSN_MATCH_LH 0x1003 > +#define INSN_MASK_LH 0x707f > +#define INSN_MATCH_LW 0x2003 > +#define INSN_MASK_LW 0x707f > +#define INSN_MATCH_LD 0x3003 > +#define INSN_MASK_LD 0x707f > 
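The INSN_MATCH_*/INSN_MASK_* pairs classify a trapped instruction word: AND out the
operand fields with the mask, then compare against the match constant (opcode plus
funct3). A minimal sketch using the LW pair from the hunk above; the example
encoding is "lw a0, 0(a1)" hand-assembled by me, so treat it as illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* From the header above: opcode LOAD (0x03) with funct3 = 010 (LW). */
#define INSN_MATCH_LW 0x2003
#define INSN_MASK_LW  0x707f

/* True if `insn` is an lw, regardless of rd/rs1/immediate. */
static int insn_is_lw(uint32_t insn)
{
  return (insn & INSN_MASK_LW) == INSN_MATCH_LW;
}
```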
+#define INSN_MATCH_LBU 0x4003 > +#define INSN_MASK_LBU 0x707f > +#define INSN_MATCH_LHU 0x5003 > +#define INSN_MASK_LHU 0x707f > +#define INSN_MATCH_LWU 0x6003 > +#define INSN_MASK_LWU 0x707f > +#define INSN_MATCH_SB 0x23 > +#define INSN_MASK_SB 0x707f > +#define INSN_MATCH_SH 0x1023 > +#define INSN_MASK_SH 0x707f > +#define INSN_MATCH_SW 0x2023 > +#define INSN_MASK_SW 0x707f > +#define INSN_MATCH_SD 0x3023 > +#define INSN_MASK_SD 0x707f > + > +#define INSN_MATCH_FLW 0x2007 > +#define INSN_MASK_FLW 0x707f > +#define INSN_MATCH_FLD 0x3007 > +#define INSN_MASK_FLD 0x707f > +#define INSN_MATCH_FLQ 0x4007 > +#define INSN_MASK_FLQ 0x707f > +#define INSN_MATCH_FSW 0x2027 > +#define INSN_MASK_FSW 0x707f > +#define INSN_MATCH_FSD 0x3027 > +#define INSN_MASK_FSD 0x707f > +#define INSN_MATCH_FSQ 0x4027 > +#define INSN_MASK_FSQ 0x707f > + > +#define INSN_MATCH_C_LD 0x6000 > +#define INSN_MASK_C_LD 0xe003 > +#define INSN_MATCH_C_SD 0xe000 > +#define INSN_MASK_C_SD 0xe003 > +#define INSN_MATCH_C_LW 0x4000 > +#define INSN_MASK_C_LW 0xe003 > +#define INSN_MATCH_C_SW 0xc000 > +#define INSN_MASK_C_SW 0xe003 > +#define INSN_MATCH_C_LDSP 0x6002 > +#define INSN_MASK_C_LDSP 0xe003 > +#define INSN_MATCH_C_SDSP 0xe002 > +#define INSN_MASK_C_SDSP 0xe003 > +#define INSN_MATCH_C_LWSP 0x4002 > +#define INSN_MASK_C_LWSP 0xe003 > +#define INSN_MATCH_C_SWSP 0xc002 > +#define INSN_MASK_C_SWSP 0xe003 > + > +#define INSN_MATCH_C_FLD 0x2000 > +#define INSN_MASK_C_FLD 0xe003 > +#define INSN_MATCH_C_FLW 0x6000 > +#define INSN_MASK_C_FLW 0xe003 > +#define INSN_MATCH_C_FSD 0xa000 > +#define INSN_MASK_C_FSD 0xe003 > +#define INSN_MATCH_C_FSW 0xe000 > +#define INSN_MASK_C_FSW 0xe003 > +#define INSN_MATCH_C_FLDSP 0x2002 > +#define INSN_MASK_C_FLDSP 0xe003 > +#define INSN_MATCH_C_FSDSP 0xa002 > +#define INSN_MASK_C_FSDSP 0xe003 > +#define INSN_MATCH_C_FLWSP 0x6002 > +#define INSN_MASK_C_FLWSP 0xe003 > +#define INSN_MATCH_C_FSWSP 0xe002 > +#define INSN_MASK_C_FSWSP 0xe003 > + > +#define INSN_LEN(insn) 
((((insn) & 0x3) < 0x3) ? 2 : 4) > + > +#if __riscv_xlen == 64 > +#define LOG_REGBYTES 3 > +#else > +#define LOG_REGBYTES 2 > +#endif > +#define REGBYTES (1 << LOG_REGBYTES) > + > +#define SH_RD 7 > +#define SH_RS1 15 > +#define SH_RS2 20 > +#define SH_RS2C 2 > + > +#define RV_X(x, s, n) (((x) >> (s)) & ((1 << (n)) - 1)) > +#define RVC_LW_IMM(x) ((RV_X(x, 6, 1) << 2) | \ > + (RV_X(x, 10, 3) << 3) | \ > + (RV_X(x, 5, 1) << 6)) > +#define RVC_LD_IMM(x) ((RV_X(x, 10, 3) << 3) | \ > + (RV_X(x, 5, 2) << 6)) > +#define RVC_LWSP_IMM(x) ((RV_X(x, 4, 3) << 2) | \ > + (RV_X(x, 12, 1) << 5) | \ > + (RV_X(x, 2, 2) << 6)) > +#define RVC_LDSP_IMM(x) ((RV_X(x, 5, 2) << 3) | \ > + (RV_X(x, 12, 1) << 5) | \ > + (RV_X(x, 2, 3) << 6)) > +#define RVC_SWSP_IMM(x) ((RV_X(x, 9, 4) << 2) | \ > + (RV_X(x, 7, 2) << 6)) > +#define RVC_SDSP_IMM(x) ((RV_X(x, 10, 3) << 3) | \ > + (RV_X(x, 7, 3) << 6)) > +#define RVC_RS1S(insn) (8 + RV_X(insn, SH_RD, 3)) > +#define RVC_RS2S(insn) (8 + RV_X(insn, SH_RS2C, 3)) > +#define RVC_RS2(insn) RV_X(insn, SH_RS2C, 5) > + > +#define SHIFT_RIGHT(x, y) \ > + ((y) < 0 ? 
((x) << -(y)) : ((x) >> (y))) > + > +#define REG_MASK \ > + ((1 << (5 + LOG_REGBYTES)) - (1 << LOG_REGBYTES)) > + > +#define REG_OFFSET(insn, pos) \ > + (SHIFT_RIGHT((insn), (pos) - LOG_REGBYTES) & REG_MASK) > + > +#define REG_PTR(insn, pos, regs) \ > + (ulong *)((ulong)(regs) + REG_OFFSET(insn, pos)) > + > +#define GET_RM(insn) (((insn) >> 12) & 7) > + > +#define GET_RS1(insn, regs) (*REG_PTR(insn, SH_RS1, regs)) > +#define GET_RS2(insn, regs) (*REG_PTR(insn, SH_RS2, regs)) > +#define GET_RS1S(insn, regs) (*REG_PTR(RVC_RS1S(insn), 0, regs)) > +#define GET_RS2S(insn, regs) (*REG_PTR(RVC_RS2S(insn), 0, regs)) > +#define GET_RS2C(insn, regs) (*REG_PTR(insn, SH_RS2C, regs)) > +#define GET_SP(regs) (*REG_PTR(2, 0, regs)) > +#define SET_RD(insn, regs, val) (*REG_PTR(insn, SH_RD, regs) = (val)) > +#define IMM_I(insn) ((s32)(insn) >> 20) > +#define IMM_S(insn) (((s32)(insn) >> 25 << 5) | \ > + (s32)(((insn) >> 7) & 0x1f)) > +#define MASK_FUNCT3 0x7000 > + > +#endif > diff --git a/MdePkg/Library/BaseLib/RiscV64/sbi_const.h b/MdePkg/Library/BaseLib/RiscV64/sbi_const.h > new file mode 100644 > index 0000000..e6868c4 > --- /dev/null > +++ b/MdePkg/Library/BaseLib/RiscV64/sbi_const.h > @@ -0,0 +1,53 @@ > +/** @file > + Definitions of RISC-V SBI constants. > + > + Copyright (c) 2019, Hewlett Packard Enterprise Development LP. All rights reserved.
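Two of the decode helpers above are worth a concrete check: RV_X pulls an n-bit
field out of an instruction word starting at bit s, and IMM_I recovers the
sign-extended I-type immediate via an arithmetic right shift of the signed word.
A host-side sketch with int32_t standing in for the header's s32; the addi
encoding below is hand-assembled by me:

```c
#include <assert.h>
#include <stdint.h>

/* Extract n bits of x starting at bit s (same expression as the header). */
#define RV_X(x, s, n)  (((x) >> (s)) & ((1 << (n)) - 1))

/* I-type immediate lives in bits 31:20; the signed shift sign-extends it. */
#define IMM_I(insn)    ((int32_t)(insn) >> 20)
```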
> +
> +  This program and the accompanying materials
> +  are licensed and made available under the terms and conditions of the BSD License
> +  which accompanies this distribution. The full text of the license may be found at
> +  http://opensource.org/licenses/bsd-license.php
> +
> +  THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> +  WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +
> +  SPDX-License-Identifier: BSD-2-Clause
> +
> +  Copyright (c) 2019 Western Digital Corporation or its affiliates.
> +
> +**/
> +
> +#ifndef __SBI_CONST_H__
> +#define __SBI_CONST_H__
> +
> +/* Some constant macros are used in both assembler and
> + * C code. Therefore we cannot annotate them always with
> + * 'UL' and other type specifiers unilaterally. We
> + * use the following macros to deal with this.
> + *
> + * Similarly, _AT() will cast an expression with a type in C, but
> + * leave it unchanged in asm.
> + */
> +
> +#ifdef __ASSEMBLY__
> +#define _AC(X,Y)    X
> +#define _AT(T,X)    X
> +#else
> +#define __AC(X,Y)   (X##Y)
> +#define _AC(X,Y)    __AC(X,Y)
> +#define _AT(T,X)    ((T)(X))
> +#endif
> +
> +#define _UL(x)      (_AC(x, UL))
> +#define _ULL(x)     (_AC(x, ULL))
> +
> +#define _BITUL(x)   (_UL(1) << (x))
> +#define _BITULL(x)  (_ULL(1) << (x))
> +
> +#define UL(x)       (_UL(x))
> +#define ULL(x)      (_ULL(x))
> +
> +#define __STR(s)    #s
> +#define STRINGIFY(s)  __STR(s)
> +
> +#endif
> --
> 2.7.4
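For anyone unfamiliar with the _AC() idiom quoted above: it lets one header line
yield `1UL << 12` when compiled as C but a bare `1 << 12` when assembled, since
gas does not understand the UL suffix. A C-side expansion sketch (__ASSEMBLY__
not defined), reusing the PAGE_* definitions from riscv_asm.h above:

```c
#include <assert.h>

/* C-side branch of the _AC() machinery: token-paste the UL suffix on. */
#define __AC(X, Y) (X##Y)
#define _AC(X, Y)  __AC(X, Y)
#define _UL(x)     (_AC(x, UL))
#define _BITUL(x)  (_UL(1) << (x))

/* Same shape as the riscv_asm.h definitions quoted earlier. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (_AC(1, UL) << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))
```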