* Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
[not found] <1449471969-16949-1-git-send-email-ard.biesheuvel@linaro.org>
@ 2019-01-22 15:09 ` Laszlo Ersek
2019-01-22 15:33 ` Laszlo Ersek
2019-01-22 15:37 ` Ard Biesheuvel
0 siblings, 2 replies; 15+ messages in thread
From: Laszlo Ersek @ 2019-01-22 15:09 UTC (permalink / raw)
To: Ard Biesheuvel, Leif Lindholm, Cohen, Eugene
Cc: edk2-devel, tanxiaojun, Marc Zyngier, Christoffer Dall
Performing thread-necromancy on a patch from 2015, which is today known
as commit b7de7e3cab3f. Also CC'ing Christoffer and Marc.
Question at the bottom.
On 12/07/15 08:06, Ard Biesheuvel wrote:
> From: "Cohen, Eugene" <eugene@hp.com>
>
> This patch updates the ArmPkg variant of InvalidateInstructionCacheRange to
> flush the data cache only to the point of unification (PoU). This improves
> performance and also allows invalidation in scenarios where it would be
> inappropriate to flush to the point of coherency (like when executing code
> from L2 configured as cache-as-ram).
>
> Contributed-under: TianoCore Contribution Agreement 1.0
> Signed-off-by: Eugene Cohen <eugene@hp.com>
>
> Added AARCH64 and ARM/GCC implementations of the above.
>
> Contributed-under: TianoCore Contribution Agreement 1.0
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
> ArmPkg/Include/Library/ArmLib.h | 8 +++++++-
> ArmPkg/Library/ArmCacheMaintenanceLib/ArmCacheMaintenanceLib.c | 2 +-
> ArmPkg/Library/ArmLib/AArch64/AArch64Support.S | 6 ++++++
> ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.S | 6 ++++++
> ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.asm | 5 +++++
> 5 files changed, 25 insertions(+), 2 deletions(-)
>
> diff --git a/ArmPkg/Include/Library/ArmLib.h b/ArmPkg/Include/Library/ArmLib.h
> index 9622444ec63f..85fa1f600ba9 100644
> --- a/ArmPkg/Include/Library/ArmLib.h
> +++ b/ArmPkg/Include/Library/ArmLib.h
> @@ -183,12 +183,18 @@ ArmInvalidateDataCacheEntryByMVA (
>
> VOID
> EFIAPI
> -ArmCleanDataCacheEntryByMVA (
> +ArmCleanDataCacheEntryToPoUByMVA(
> IN UINTN Address
> );
>
> VOID
> EFIAPI
> +ArmCleanDataCacheEntryByMVA(
> +IN UINTN Address
> +);
> +
> +VOID
> +EFIAPI
> ArmCleanInvalidateDataCacheEntryByMVA (
> IN UINTN Address
> );
> diff --git a/ArmPkg/Library/ArmCacheMaintenanceLib/ArmCacheMaintenanceLib.c b/ArmPkg/Library/ArmCacheMaintenanceLib/ArmCacheMaintenanceLib.c
> index feab4497ac1b..1045f9068f4d 100644
> --- a/ArmPkg/Library/ArmCacheMaintenanceLib/ArmCacheMaintenanceLib.c
> +++ b/ArmPkg/Library/ArmCacheMaintenanceLib/ArmCacheMaintenanceLib.c
> @@ -64,7 +64,7 @@ InvalidateInstructionCacheRange (
> IN UINTN Length
> )
> {
> - CacheRangeOperation (Address, Length, ArmCleanDataCacheEntryByMVA);
> + CacheRangeOperation (Address, Length, ArmCleanDataCacheEntryToPoUByMVA);
> ArmInvalidateInstructionCache ();
> return Address;
> }
> diff --git a/ArmPkg/Library/ArmLib/AArch64/AArch64Support.S b/ArmPkg/Library/ArmLib/AArch64/AArch64Support.S
> index c530d19e897e..db21f73f0ed7 100644
> --- a/ArmPkg/Library/ArmLib/AArch64/AArch64Support.S
> +++ b/ArmPkg/Library/ArmLib/AArch64/AArch64Support.S
> @@ -22,6 +22,7 @@
> GCC_ASM_EXPORT (ArmInvalidateInstructionCache)
> GCC_ASM_EXPORT (ArmInvalidateDataCacheEntryByMVA)
> GCC_ASM_EXPORT (ArmCleanDataCacheEntryByMVA)
> +GCC_ASM_EXPORT (ArmCleanDataCacheEntryToPoUByMVA)
> GCC_ASM_EXPORT (ArmCleanInvalidateDataCacheEntryByMVA)
> GCC_ASM_EXPORT (ArmInvalidateDataCacheEntryBySetWay)
> GCC_ASM_EXPORT (ArmCleanDataCacheEntryBySetWay)
> @@ -72,6 +73,11 @@ ASM_PFX(ArmCleanDataCacheEntryByMVA):
> ret
>
>
> +ASM_PFX(ArmCleanDataCacheEntryToPoUByMVA):
> + dc cvau, x0 // Clean single data cache line to PoU
> + ret
> +
> +
> ASM_PFX(ArmCleanInvalidateDataCacheEntryByMVA):
> dc civac, x0 // Clean and invalidate single data cache line
> ret
> diff --git a/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.S b/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.S
> index 5f030d92de31..7de1b11ef818 100644
> --- a/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.S
> +++ b/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.S
> @@ -19,6 +19,7 @@
> GCC_ASM_EXPORT (ArmInvalidateInstructionCache)
> GCC_ASM_EXPORT (ArmInvalidateDataCacheEntryByMVA)
> GCC_ASM_EXPORT (ArmCleanDataCacheEntryByMVA)
> +GCC_ASM_EXPORT (ArmCleanDataCacheEntryToPoUByMVA)
> GCC_ASM_EXPORT (ArmCleanInvalidateDataCacheEntryByMVA)
> GCC_ASM_EXPORT (ArmInvalidateDataCacheEntryBySetWay)
> GCC_ASM_EXPORT (ArmCleanDataCacheEntryBySetWay)
> @@ -69,6 +70,11 @@ ASM_PFX(ArmCleanDataCacheEntryByMVA):
> bx lr
>
>
> +ASM_PFX(ArmCleanDataCacheEntryToPoUByMVA):
> + mcr p15, 0, r0, c7, c11, 1 @clean single data cache line to PoU
> + bx lr
> +
> +
> ASM_PFX(ArmCleanInvalidateDataCacheEntryByMVA):
> mcr p15, 0, r0, c7, c14, 1 @clean and invalidate single data cache line
> bx lr
> diff --git a/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.asm b/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.asm
> index df7e22dca2d9..a460bd2da7a9 100644
> --- a/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.asm
> +++ b/ArmPkg/Library/ArmLib/ArmV7/ArmV7Support.asm
> @@ -34,6 +34,11 @@ CTRL_I_BIT EQU (1 << 12)
> bx lr
>
>
> + RVCT_ASM_EXPORT ArmCleanDataCacheEntryToPoUByMVA
> + mcr p15, 0, r0, c7, c11, 1 ; clean single data cache line to PoU
> + bx lr
> +
> +
> RVCT_ASM_EXPORT ArmCleanInvalidateDataCacheEntryByMVA
> mcr p15, 0, r0, c7, c14, 1 ; clean and invalidate single data cache line
> bx lr
>
The implementation of the LoadImage boot service calls
InvalidateInstructionCacheRange():
CoreLoadImage() [MdeModulePkg/Core/Dxe/Image/Image.c]
CoreLoadImageCommon() [MdeModulePkg/Core/Dxe/Image/Image.c]
CoreLoadPeImage() [MdeModulePkg/Core/Dxe/Image/Image.c]
InvalidateInstructionCacheRange() [ArmPkg/Library/ArmCacheMaintenanceLib/ArmCacheMaintenanceLib.c]
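For reference, with this patch applied the range operation boils down to
roughly the following. This is only a sketch (the function name is made up
here, and the real code goes through CacheRangeOperation(), which also
handles the VOID* signature); the ArmLib calls themselves are the ones the
patch uses:

  VOID
  EFIAPI
  SketchInvalidateInstructionCacheRange (
    IN UINTN  Address,
    IN UINTN  Length
    )
  {
    UINTN  LineSize = ArmDataCacheLineLength ();
    UINTN  Line     = Address & ~(LineSize - 1);   // align down to a line

    while (Line < Address + Length) {
      ArmCleanDataCacheEntryToPoUByMVA (Line);     // DC CVAU: clean to PoU
      Line += LineSize;
    }
    ArmDataSynchronizationBarrier ();              // DSB after the loop
    ArmInvalidateInstructionCache ();              // IC IALLU + DSB + ISB
  }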
Assuming that this code runs in a virtual machine, is any one of the
following scenarios plausible?
(1) the dcache is flushed to PoU on physical socket #M, and the icache
is invalidated to PoU on physical socket #N, M being unequal to N
(2) the dcache cleaning + icache invalidation happen on the same
physical socket #M, but the image execution (via StartImage)
commences on socket #N
In either case, DRAM (PoC) would not form a "bridge" between the
physical CPUs sitting in said sockets. If the host kernel / KVM
re-scheduled VCPU#0 from socket #M to socket #N in the "middle" of the
above scenarios (1) and (2), could that result in StartImage executing
*not* what LoadImage loaded?
Because, we got a report
<https://bugzilla.redhat.com/show_bug.cgi?id=1666280> -- sorry, private
bug :/ -- in which the *very first* instruction of grub crashes. That
is, the instruction that forms the entry point of the image. The symptom
is:
> ESR : EC 0x21 IL 0x1 ISS 0x0000000E
>
> Instruction abort: Permission fault, second level
If I'm parsing the ARM ARM correctly:
- "ESR" stands for "Exception Syndrome Register".
- "EC 0x21" stands for Exception Class 0x21, "Instruction Abort taken
without a change of Exception level. Used for MMU faults generated by
instruction accesses, and for synchronous external aborts, including
synchronous parity errors."
- "ISS" stands for "Instruction Specific Syndrome".
- "ISS 0x00E" (24 bit value), given EC 0x21, stands for "Permission
fault, second level", as the log itself says.
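Putting those fields together (only an illustration of the ESR_ELx bit
layout as I read it, not code from any of the components involved):

  UINT32  Esr  = (0x21U << 26) | (1U << 25) | 0x0EU;  // the reported value
  UINT32  Ec   = (Esr >> 26) & 0x3F;   // 0x21: Instruction Abort, same EL
  UINT32  Il   = (Esr >> 25) & 0x1;    // 1:    32-bit instruction
  UINT32  Iss  = Esr & 0x1FFFFFF;      // 0x0000000E
  UINT32  Ifsc = Iss & 0x3F;           // 0x0E: Permission fault, level 2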
The crash reproduces intermittently / unpredictably, and it happens on
unspecified hardware. Personally I couldn't reproduce it on my APM
Mustang A3, using the exact same guest ISO.
So, I'm thinking that, as soon as CoreStartImage() calls
Image->EntryPoint(), the instruction fetch blows up, because we didn't
clean the dcache / invalidate the icache *to PoC* in CoreLoadImage(),
and KVM moved VCPU#0 to a different physical socket meanwhile.
Am I totally lost? :)
(I plan to undo commit cf580da1bc4c ("ArmPkg/ArmLib: don't invalidate
entire I-cache on range operation", 2016-05-12) and commit b7de7e3cab3f
("ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU",
2015-12-08), and to ask the reporter to reproduce the issue like that,
but I figured I'd ask first.)
Thanks!
Laszlo
* Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
2019-01-22 15:09 ` [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU Laszlo Ersek
@ 2019-01-22 15:33 ` Laszlo Ersek
2019-01-22 15:37 ` Ard Biesheuvel
1 sibling, 0 replies; 15+ messages in thread
From: Laszlo Ersek @ 2019-01-22 15:33 UTC (permalink / raw)
To: Ard Biesheuvel, Leif Lindholm, Cohen, Eugene
Cc: edk2-devel, tanxiaojun, Marc Zyngier, Christoffer Dall
On 01/22/19 16:09, Laszlo Ersek wrote:
> (I plan to undo commit cf580da1bc4c ("ArmPkg/ArmLib: don't invalidate
> entire I-cache on range operation", 2016-05-12) and commit b7de7e3cab3f
> ("ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU",
> 2015-12-08), and to ask the reporter to reproduce the issue like that,
> but I figured I'd ask first.)
OK, I have to apologize, my lack of knowledge of aarch64 shows here. It
seems that the icache can only be invalidated to PoU (the ARM ARM only
lists IC IALLU / IALLUIS / IVAU), so undoing cf580da1bc4c should be useless.
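For reference, the instruction cache maintenance operations in question (as
I read the ARM ARM):

  IC IALLU    -- invalidate all I-cache to PoU, local PE
  IC IALLUIS  -- invalidate all I-cache to PoU, Inner Shareable domain
  IC IVAU, Xt -- invalidate I-cache line by VA, to PoU

that is, there is no "invalidate the I-cache to PoC" operation to fall back
to.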
Thanks,
Laszlo
* Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
2019-01-22 15:09 ` [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU Laszlo Ersek
2019-01-22 15:33 ` Laszlo Ersek
@ 2019-01-22 15:37 ` Ard Biesheuvel
2019-01-23 9:14 ` Laszlo Ersek
1 sibling, 1 reply; 15+ messages in thread
From: Ard Biesheuvel @ 2019-01-22 15:37 UTC (permalink / raw)
To: Laszlo Ersek
Cc: Leif Lindholm, Cohen, Eugene, edk2-devel@lists.01.org, Tanxiaojun,
Marc Zyngier, Christoffer Dall
On Tue, 22 Jan 2019 at 16:09, Laszlo Ersek <lersek@redhat.com> wrote:
>
> Performing thread-necromancy on a patch from 2015, which is today known
> as commit b7de7e3cab3f. Also CC'ing Christoffer and Marc.
>
> Question at the bottom.
>
> On 12/07/15 08:06, Ard Biesheuvel wrote:
> > [patch quoted in full in the first message above; quote trimmed]
>
> The implementation of the LoadImage boot service calls
> InvalidateInstructionCacheRange():
>
> CoreLoadImage() [MdeModulePkg/Core/Dxe/Image/Image.c]
> CoreLoadImageCommon() [MdeModulePkg/Core/Dxe/Image/Image.c]
> CoreLoadPeImage() [MdeModulePkg/Core/Dxe/Image/Image.c]
> InvalidateInstructionCacheRange() [ArmPkg/Library/ArmCacheMaintenanceLib/ArmCacheMaintenanceLib.c]
>
> > Assuming that this code runs in a virtual machine, is any one of the
> following scenarios plausible?
>
> (1) the dcache is flushed to PoU on physical socket #M, and the icache
> is invalidated to PoU on physical socket #N, M being unequal to N
>
> (2) the dcache cleaning + icache invalidation happen on the same
> physical socket #M, but the image execution (via StartImage)
> commences on socket #N
>
> In either case, DRAM (PoC) would not form a "bridge" between the
> physical CPUs sitting in said sockets. If the host kernel / KVM
> re-scheduled VCPU#0 from socket #M to socket #N in the "middle" of the
> above scenarios (1) and (2), could that result in StartImage executing
> *not* what LoadImage loaded?
>
Cache maintenance by virtual address to the PoU is broadcast to the
inner shareable shareability domain. So this should be sufficient to
ensure that the CPU executing the instruction sees the correct data.
Our code uses full system barriers after each stage of cache
maintenance, so the code looks correct to me. However, it does rely on
the host firmware to configure the inner shareable shareability domain
correctly (and use the correct memory attributes for all memory it
maps)
> Because, we got a report
> <https://bugzilla.redhat.com/show_bug.cgi?id=1666280> -- sorry, private
> bug :/ -- in which the *very first* instruction of grub crashes. That
> is, the instruction that forms the entry point of the image. The symptom
> is:
>
> > ESR : EC 0x21 IL 0x1 ISS 0x0000000E
> >
> > Instruction abort: Permission fault, second level
>
> If I'm parsing the ARM ARM correctly:
>
> - "ESR" stands for "Exception Syndrome Register".
>
> - "EC 0x21" stands for Exception Class 0x21, "Instruction Abort taken
> without a change of Exception level. Used for MMU faults generated by
> instruction accesses, and for synchronous external aborts, including
> synchronous parity errors."
>
> - "ISS" stands for "Instruction Specific Syndrome".
>
> - "ISS 0x00E" (24 bit value), given EC 0x21, stands for "Permission
> fault, second level", as the log itself says.
>
This appears to be a TLB maintenance issue, not a cache maintenance
issue. Level 2 mappings are 2 MB block mappings, and so we usually
have smaller mappings for executables (unless GRUB is much larger than
2 MB and it covers an entire 2 MB naturally aligned block)
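(For the 4 KB translation granule used here, the levels work out as

  level 1 entry: 1 GB  (512 * 2 MB)
  level 2 entry: 2 MB  (512 * 4 KB)
  level 3 entry: 4 KB  (one page)

so a permission fault reported at the second level implies the faulting VA
was translated by a 2 MB block descriptor rather than a 4 KB page entry.)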
Does it happen in DEBUG mode? Is SetUefiImageMemoryAttributes() being
called to remap the memory R-X ?
> The crash reproduces intermittently / unpredictably, and it happens on
> unspecified hardware. Personally I couldn't reproduce it on my APM
> Mustang A3, using the exact same guest ISO.
>
> So, I'm thinking that, as soon as CoreStartImage() calls
> Image->EntryPoint(), the instruction fetch blows up, because we didn't
> clean the dcache / invalidate the icache *to PoC* in CoreLoadImage(),
> and KVM moved VCPU#0 to a different physical socket meanwhile.
>
> Am I totally lost? :)
>
> (I plan to undo commit cf580da1bc4c ("ArmPkg/ArmLib: don't invalidate
> entire I-cache on range operation", 2016-05-12) and commit b7de7e3cab3f
> ("ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU",
> 2015-12-08), and to ask the reporter to reproduce the issue like that,
> but I figured I'd ask first.)
>
> Thanks!
> Laszlo
* Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
2019-01-22 15:37 ` Ard Biesheuvel
@ 2019-01-23 9:14 ` Laszlo Ersek
2019-01-23 9:26 ` Ard Biesheuvel
0 siblings, 1 reply; 15+ messages in thread
From: Laszlo Ersek @ 2019-01-23 9:14 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Leif Lindholm, Cohen, Eugene, edk2-devel@lists.01.org, Tanxiaojun,
Marc Zyngier, Christoffer Dall
On 01/22/19 16:37, Ard Biesheuvel wrote:
> On Tue, 22 Jan 2019 at 16:09, Laszlo Ersek <lersek@redhat.com> wrote:
>>
>> Performing thread-necromancy on a patch from 2015, which is today known
>> as commit b7de7e3cab3f. Also CC'ing Christoffer and Marc.
>>
>> Question at the bottom.
>>
>> On 12/07/15 08:06, Ard Biesheuvel wrote:
>>> [patch quoted in full in the first message above; quote trimmed]
>>
>> The implementation of the LoadImage boot service calls
>> InvalidateInstructionCacheRange():
>>
>> CoreLoadImage() [MdeModulePkg/Core/Dxe/Image/Image.c]
>> CoreLoadImageCommon() [MdeModulePkg/Core/Dxe/Image/Image.c]
>> CoreLoadPeImage() [MdeModulePkg/Core/Dxe/Image/Image.c]
>> InvalidateInstructionCacheRange() [ArmPkg/Library/ArmCacheMaintenanceLib/ArmCacheMaintenanceLib.c]
>>
>> Assuming that this code runs in a virtual machine, is any one of the
>> following scenarios plausible?
>>
>> (1) the dcache is flushed to PoU on physical socket #M, and the icache
>> is invalidated to PoU on physical socket #N, M being unequal to N
>>
>> (2) the dcache cleaning + icache invalidation happen on the same
>> physical socket #M, but the image execution (via StartImage)
>> commences on socket #N
>>
>> In either case, DRAM (PoC) would not form a "bridge" between the
>> physical CPUs sitting in said sockets. If the host kernel / KVM
>> re-scheduled VCPU#0 from socket #M to socket #N in the "middle" of the
>> above scenarios (1) and (2), could that result in StartImage executing
>> *not* what LoadImage loaded?
>>
>
> Cache maintenance by virtual address to the PoU is broadcast to the
> inner shareable shareability domain. So this should be sufficient to
> ensure that the CPU executing the instruction sees the correct data.
>
> Our code uses full system barriers after each stage of cache
> maintenance, so the code looks correct to me. However, it does rely on
> the host firmware to configure the inner shareable shareability domain
> correctly (and use the correct memory attributes for all memory it
> maps)
OK, thank you.
>> Because, we got a report
>> <https://bugzilla.redhat.com/show_bug.cgi?id=1666280> -- sorry, private
>> bug :/ -- in which the *very first* instruction of grub crashes. That
>> is, the instruction that forms the entry point of the image. The symptom
>> is:
>>
>>> ESR : EC 0x21 IL 0x1 ISS 0x0000000E
>>>
>>> Instruction abort: Permission fault, second level
>>
>> If I'm parsing the ARM ARM correctly:
>>
>> - "ESR" stands for "Exception Syndrome Register".
>>
>> - "EC 0x21" stands for Exception Class 0x21, "Instruction Abort taken
>> without a change of Exception level. Used for MMU faults generated by
>> instruction accesses, and for synchronous external aborts, including
>> synchronous parity errors."
>>
>> - "ISS" stands for "Instruction Specific Syndrome".
>>
>> - "ISS 0x00E" (24 bit value), given EC 0x21, stands for "Permission
>> fault, second level", as the log itself says.
>>
>
> This appears to be a TLB maintenance issue, not a cache maintenance
> issue. Level 2 mappings are 2 MB block mappings, and so we usually
> have smaller mappings for executables (unless GRUB is much larger than
> 2 MB and it covers an entire 2 MB naturally aligned block)
>
> Does it happen in DEBUG mode?
Yes, it does, with full debug logs enabled.
> Is SetUefiImageMemoryAttributes() being
> called to remap the memory R-X ?
No, it is not; the grub binary in question doesn't have the required
section alignment (... I hope at least that that's what your question
refers to):
> ProtectUefiImageCommon - 0x3E6C54C0
> - 0x000000013BEEF000 - 0x0000000000030600
> !!!!!!!! ProtectUefiImageCommon - Section Alignment(0x200) is incorrect !!!!!!!!
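(The reason 0x200 gets flagged: the protection code can only apply
per-section R-X/RW- permissions when the image's sections are aligned to
the 4 KB page size, so the check amounts to roughly the following -- a
sketch, the exact form in MdeModulePkg may differ:

  if ((SectionAlignment & (EFI_PAGE_SIZE - 1)) != 0) {
    // 0x200 is not a multiple of 0x1000; leave the image with its
    // original, fully executable mapping, no per-section protection
    return;
  }

)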
Thank you,
Laszlo
* Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
2019-01-23 9:14 ` Laszlo Ersek
@ 2019-01-23 9:26 ` Ard Biesheuvel
2019-01-23 9:54 ` Laszlo Ersek
0 siblings, 1 reply; 15+ messages in thread
From: Ard Biesheuvel @ 2019-01-23 9:26 UTC (permalink / raw)
To: Laszlo Ersek
Cc: Leif Lindholm, Cohen, Eugene, edk2-devel@lists.01.org, Tanxiaojun,
Marc Zyngier, Christoffer Dall
On Wed, 23 Jan 2019 at 10:14, Laszlo Ersek <lersek@redhat.com> wrote:
>
> On 01/22/19 16:37, Ard Biesheuvel wrote:
> > On Tue, 22 Jan 2019 at 16:09, Laszlo Ersek <lersek@redhat.com> wrote:
> >>
> >> Performing thread-necromancy on a patch from 2015, which is today known
> >> as commit b7de7e3cab3f. Also CC'ing Christoffer and Marc.
> >>
> >> Question at the bottom.
> >>
> >> On 12/07/15 08:06, Ard Biesheuvel wrote:
> >>> [patch quoted in full in the first message above; quote trimmed]
> >>
> >> The implementation of the LoadImage boot service calls
> >> InvalidateInstructionCacheRange():
> >>
> >> CoreLoadImage() [MdeModulePkg/Core/Dxe/Image/Image.c]
> >> CoreLoadImageCommon() [MdeModulePkg/Core/Dxe/Image/Image.c]
> >> CoreLoadPeImage() [MdeModulePkg/Core/Dxe/Image/Image.c]
> >> InvalidateInstructionCacheRange() [ArmPkg/Library/ArmCacheMaintenanceLib/ArmCacheMaintenanceLib.c]
> >>
> >> Assuming that this code runs in a virtual machine, is any one of the
> >> following scenarios plausible?
> >>
> >> (1) the dcache is flushed to PoU on physical socket #M, and the icache
> >> is invalidated to PoU on physical socket #N, M being unequal to N
> >>
> >> (2) the dcache cleaning + icache invalidation happen on the same
> >> physical socket #M, but the image execution (via StartImage)
> >> commences on socket #N
> >>
> >> In either case, DRAM (PoC) would not form a "bridge" between the
> >> physical CPUs sitting in said sockets. If the host kernel / KVM
> >> re-scheduled VCPU#0 from socket #M to socket #N in the "middle" of the
> >> above scenarios (1) and (2), could that result in StartImage executing
> >> *not* what LoadImage loaded?
> >>
> >
> > Cache maintenance by virtual address to the PoU is broadcast to the
> > inner shareable shareability domain. So this should be sufficient to
> > ensure that the CPU executing the instruction sees the correct data.
> >
> > Our code uses full system barriers after each stage of cache
> > maintenance, so the code looks correct to me. However, it does rely on
> > the host firmware to configure the inner shareable shareability domain
> > correctly (and use the correct memory attributes for all memory it
> > maps)
>
> OK, thank you.
>
> >> Because, we got a report
> >> <https://bugzilla.redhat.com/show_bug.cgi?id=1666280> -- sorry, private
> >> bug :/ -- in which the *very first* instruction of grub crashes. That
> >> is, the instruction that forms the entry point of the image. The symptom
> >> is:
> >>
> >>> ESR : EC 0x21 IL 0x1 ISS 0x0000000E
> >>>
> >>> Instruction abort: Permission fault, second level
> >>
> >> If I'm parsing the ARM ARM correctly:
> >>
> >> - "ESR" stands for "Exception Syndrome Register".
> >>
> >> - "EC 0x21" stands for Exception Class 0x21, "Instruction Abort taken
> >> without a change of Exception level. Used for MMU faults generated by
> >> instruction accesses, and for synchronous external aborts, including
> >> synchronous parity errors."
> >>
> >> - "ISS" stands for "Instruction Specific Syndrome".
> >>
> >> - "ISS 0x00E" (24 bit value), given EC 0x21, stands for "Permission
> >> fault, second level", as the log itself says.
> >>
> >
> > This appears to be a TLB maintenance issue, not a cache maintenance
> > issue. Level 2 mappings are 2 MB block mappings, and so we usually
> > have smaller mappings for executables (unless GRUB is much larger than
> > 2 MB and it covers an entire 2 MB naturally aligned block)
> >
> > Does it happen in DEBUG mode?
>
> Yes, it does, with full debug logs enabled.
>
> > Is SetUefiImageMemoryAttributes() being
> > called to remap the memory R-X ?
>
> No, it is not; the grub binary in question doesn't have the required
> section alignment (... I hope at least that that's what your question
> refers to):
>
> > ProtectUefiImageCommon - 0x3E6C54C0
> > - 0x000000013BEEF000 - 0x0000000000030600
> > !!!!!!!! ProtectUefiImageCommon - Section Alignment(0x200) is incorrect !!!!!!!!
>
This is puzzling, given that the exact same binary works on Mustang.
So when loaded, GRUB should cover the following regions:
0x13beef0000 - 0x13bf000000 (0x11000)
0x13bf000000 - 0x13bf01f600 (0x1f600)
where neither covers a 2 MB block fully, which means that the TLB
entry that we are hitting is stale.
Since ProtectUefiImageCommon() does not do anything in this case, the
stale translation must be the result of
PcdDxeNxMemoryProtectionPolicy, which either sets the wrong
permissions for EfiLoaderCode (relying on ProtectUefiImageCommon), or
we don't flush the TLBs correctly after updating the permissions when
converting the memory from EfiConventionalMemory to EfiLoaderCode
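(For completeness, the sequence the architecture wants when the permissions
of a live mapping change -- e.g. XN being cleared on exactly that
conversion -- is roughly the following; SetPageTableEntry() is a
placeholder name, the real work lives in ArmMmuLib:

  SetPageTableEntry (Entry, NewAttributes);  // plain store to the table
  ArmDataSynchronizationBarrier ();          // entry visible to the walker
  ArmInvalidateTlb ();                       // drop any stale translation
  ArmDataSynchronizationBarrier ();
  ArmInstructionSynchronizationBarrier ();   // before fetching from the range

A stale TLB entry that still carries the old XN permission would produce
exactly the instruction abort quoted earlier.)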
Are you using the default value for PcdDxeNxMemoryProtectionPolicy?
* Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
2019-01-23 9:26 ` Ard Biesheuvel
@ 2019-01-23 9:54 ` Laszlo Ersek
2019-01-23 14:02 ` Ard Biesheuvel
2019-01-28 10:46 ` Mark Rutland
0 siblings, 2 replies; 15+ messages in thread
From: Laszlo Ersek @ 2019-01-23 9:54 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Leif Lindholm, Cohen, Eugene, edk2-devel@lists.01.org, Tanxiaojun,
Marc Zyngier, Christoffer Dall
On 01/23/19 10:26, Ard Biesheuvel wrote:
> On Wed, 23 Jan 2019 at 10:14, Laszlo Ersek <lersek@redhat.com> wrote:
>> On 01/22/19 16:37, Ard Biesheuvel wrote:
>>> Is SetUefiImageMemoryAttributes() being
>>> called to remap the memory R-X ?
>>
>> No, it is not; the grub binary in question doesn't have the required
>> section alignment (... I hope at least that that's what your question
>> refers to):
>>
>>> ProtectUefiImageCommon - 0x3E6C54C0
>>> - 0x000000013BEEF000 - 0x0000000000030600
>>> !!!!!!!! ProtectUefiImageCommon - Section Alignment(0x200) is incorrect !!!!!!!!
>>
>
> This is puzzling, given that the exact same binary works on Mustang.
And even on the original (unspecified) hardware, the same binary works
frequently. My understanding is that there are five VMs executing reboot
loops in parallel, on the same host, and 4 out of 5 may hit the issue in
a reasonable time period (300 reboots or so).
> So when loaded, GRUB should cover the following regions:
>
> 0x13beef0000 - 0x13bf000000 (0x11000)
> 0x13bf000000 - 0x13bf01f600 (0x1f600)
>
> where neither covers a 2 MB block fully, which means that the TLB
> entry that we are hitting is stale.
>
> Since ProtectUefiImageCommon() does not do anything in this case, the
> stale translation must be the result of
> PcdDxeNxMemoryProtectionPolicy, which either sets the wrong
> permissions for EfiLoaderCode (relying on ProtectUefiImageCommon), or
> we don't flush the TLBs correctly after updating the permissions when
> converting the memory from EfiConventionalMemory to EfiLoaderCode
>
> Are you using the default value for PcdDxeNxMemoryProtectionPolicy?
Yes, we have
ArmVirtPkg/ArmVirt.dsc.inc:
gEfiMdeModulePkgTokenSpaceGuid.PcdDxeNxMemoryProtectionPolicy|0xC000000000007FD1
from commit 1acd7c54a724 ("ArmVirtPkg AARCH64: enable NX memory
protection for all platforms", 2017-03-01).
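For reference, the mask is indexed by EFI_MEMORY_TYPE; a set bit asks for
allocations of that type to be mapped non-executable (XN). A hypothetical
helper, only to show the decoding:

  BOOLEAN
  IsTypeMappedXn (
    IN UINT64           Policy,
    IN EFI_MEMORY_TYPE  Type
    )
  {
    return (Policy & LShiftU64 (1, (UINTN)Type)) != 0;
  }

With 0xC000000000007FD1, EfiConventionalMemory (type 7) is mapped XN while
EfiLoaderCode (type 1) is not, so the permission change happens exactly
when the range is converted for the freshly loaded image -- which is where
a missed TLB invalidation would bite.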
The binary is from the RPM
"edk2-aarch64-20180508gitee3198e672e2-5.el8+1789+f0947240.noarch", which
is basically upstream ee3198e672e2 plus a small number of backports and
downstream customizations.
Thanks!
Laszlo
* Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
2019-01-23 9:54 ` Laszlo Ersek
@ 2019-01-23 14:02 ` Ard Biesheuvel
2019-01-23 23:04 ` Laszlo Ersek
2019-01-28 10:23 ` Mark Rutland
2019-01-28 10:46 ` Mark Rutland
1 sibling, 2 replies; 15+ messages in thread
From: Ard Biesheuvel @ 2019-01-23 14:02 UTC (permalink / raw)
To: Laszlo Ersek
Cc: Leif Lindholm, Cohen, Eugene, edk2-devel@lists.01.org, Tanxiaojun,
Marc Zyngier, Christoffer Dall
On Wed, 23 Jan 2019 at 10:55, Laszlo Ersek <lersek@redhat.com> wrote:
>
> On 01/23/19 10:26, Ard Biesheuvel wrote:
> > On Wed, 23 Jan 2019 at 10:14, Laszlo Ersek <lersek@redhat.com> wrote:
> >> On 01/22/19 16:37, Ard Biesheuvel wrote:
>
> >>> Is SetUefiImageMemoryAttributes() being
> >>> called to remap the memory R-X ?
> >>
> >> No, it is not; the grub binary in question doesn't have the required
> >> section alignment (... I hope at least that that's what your question
> >> refers to):
> >>
> >>> ProtectUefiImageCommon - 0x3E6C54C0
> >>> - 0x000000013BEEF000 - 0x0000000000030600
> >>> !!!!!!!! ProtectUefiImageCommon - Section Alignment(0x200) is incorrect !!!!!!!!
> >>
> >
> > This is puzzling, given that the exact same binary works on Mustang.
>
> And even on the original (unspecified) hardware, the same binary works
> frequently. My understanding is that there are five VMs executing reboot
> loops in parallel, on the same host, and 4 out of 5 may hit the issue in
> a reasonable time period (300 reboots or so).
>
> > So when loaded, GRUB should cover the following regions:
> >
> > 0x13beef0000 - 0x13bf000000 (0x11000)
> > 0x13bf000000 - 0x13bf01f600 (0x1f600)
> >
> > where neither covers a 2 MB block fully, which means that the TLB
> > entry that we are hitting is stale.
> >
> > Since ProtectUefiImageCommon() does not do anything in this case, the
> > stale translation must be the result of
> > PcdDxeNxMemoryProtectionPolicy, which either sets the wrong
> > permissions for EfiLoaderCode (relying on ProtectUefiImageCommon), or
> > we don't flush the TLBs correctly after updating the permissions when
> > converting the memory from EfiConventionalMemory to EfiLoaderCode
> >
> > Are you using the default value for PcdDxeNxMemoryProtectionPolicy?
>
> Yes, we have
>
> ArmVirtPkg/ArmVirt.dsc.inc:
> gEfiMdeModulePkgTokenSpaceGuid.PcdDxeNxMemoryProtectionPolicy|0xC000000000007FD1
>
> from commit 1acd7c54a724 ("ArmVirtPkg AARCH64: enable NX memory
> protection for all platforms", 2017-03-01).
>
> The binary is from the RPM
> "edk2-aarch64-20180508gitee3198e672e2-5.el8+1789+f0947240.noarch", which
> is basically upstream ee3198e672e2 plus a small number of backports and
> downstream customizations.
>
This might help:
diff --git a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
index b7173e00b039..4c0b4b4efbd5 100644
--- a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
+++ b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
@@ -138,7 +138,7 @@ ASM_FUNC(ArmUpdateTranslationTableEntry)
ASM_FUNC(ArmInvalidateTlb)
EL1_OR_EL2_OR_EL3(x0)
-1: tlbi vmalle1
+1: tlbi vmalle1is
b 4f
2: tlbi alle2
b 4f
diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
index 90192df24f55..d54b1c19accf 100644
--- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
+++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
@@ -34,7 +34,7 @@
// flush the TLBs
.if \el == 1
- tlbi vmalle1
+ tlbi vmalle1is
.else
tlbi alle\el
.endif
* Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
2019-01-23 14:02 ` Ard Biesheuvel
@ 2019-01-23 23:04 ` Laszlo Ersek
2019-01-28 10:23 ` Mark Rutland
1 sibling, 0 replies; 15+ messages in thread
From: Laszlo Ersek @ 2019-01-23 23:04 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Leif Lindholm, Cohen, Eugene, edk2-devel@lists.01.org, Tanxiaojun,
Marc Zyngier, Christoffer Dall
On 01/23/19 15:02, Ard Biesheuvel wrote:
> On Wed, 23 Jan 2019 at 10:55, Laszlo Ersek <lersek@redhat.com> wrote:
>>
>> On 01/23/19 10:26, Ard Biesheuvel wrote:
>>> On Wed, 23 Jan 2019 at 10:14, Laszlo Ersek <lersek@redhat.com> wrote:
>>>> On 01/22/19 16:37, Ard Biesheuvel wrote:
>>
>>>>> Is SetUefiImageMemoryAttributes() being
>>>>> called to remap the memory R-X ?
>>>>
>>>> No, it is not; the grub binary in question doesn't have the required
>>>> section alignment (... I hope at least that that's what your question
>>>> refers to):
>>>>
>>>>> ProtectUefiImageCommon - 0x3E6C54C0
>>>>> - 0x000000013BEEF000 - 0x0000000000030600
>>>>> !!!!!!!! ProtectUefiImageCommon - Section Alignment(0x200) is incorrect !!!!!!!!
>>>>
>>>
>>> This is puzzling, given that the exact same binary works on Mustang.
>>
>> And even on the original (unspecified) hardware, the same binary works
>> frequently. My understanding is that there are five VMs executing reboot
>> loops in parallel, on the same host, and 4 out of 5 may hit the issue in
>> a reasonable time period (300 reboots or so).
>>
>>> So when loaded, GRUB should cover the following regions:
>>>
>>> 0x13beef0000 - 0x13bf000000 (0x11000)
>>> 0x13bf000000 - 0x13bf01f600 (0x1f600)
>>>
>>> where neither covers a 2 MB block fully, which means that the TLB
>>> entry that we are hitting is stale.
>>>
>>> Since ProtectUefiImageCommon() does not do anything in this case, the
>>> stale translation must be the result of
>>> PcdDxeNxMemoryProtectionPolicy, which either sets the wrong
>>> permissions for EfiLoaderCode (relying on ProtectUefiImageCommon), or
>>> we don't flush the TLBs correctly after updating the permissions when
>>> converting the memory from EfiConventionalMemory to EfiLoaderCode
>>>
>>> Are you using the default value for PcdDxeNxMemoryProtectionPolicy?
>>
>> Yes, we have
>>
>> ArmVirtPkg/ArmVirt.dsc.inc:
>> gEfiMdeModulePkgTokenSpaceGuid.PcdDxeNxMemoryProtectionPolicy|0xC000000000007FD1
>>
>> from commit 1acd7c54a724 ("ArmVirtPkg AARCH64: enable NX memory
>> protection for all platforms", 2017-03-01).
>>
>> The binary is from the RPM
>> "edk2-aarch64-20180508gitee3198e672e2-5.el8+1789+f0947240.noarch", which
>> is basically upstream ee3198e672e2 plus a small number of backports and
>> downstream customizations.
>>
>
> This might help:
>
> diff --git a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
> index b7173e00b039..4c0b4b4efbd5 100644
> --- a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
> +++ b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
> @@ -138,7 +138,7 @@ ASM_FUNC(ArmUpdateTranslationTableEntry)
>
> ASM_FUNC(ArmInvalidateTlb)
> EL1_OR_EL2_OR_EL3(x0)
> -1: tlbi vmalle1
> +1: tlbi vmalle1is
> b 4f
> 2: tlbi alle2
> b 4f
> diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> index 90192df24f55..d54b1c19accf 100644
> --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> @@ -34,7 +34,7 @@
>
> // flush the TLBs
> .if \el == 1
> - tlbi vmalle1
> + tlbi vmalle1is
> .else
> tlbi alle\el
> .endif
>
"Invalidate all EL1&0 regime stage 1 TLB entries for the current VMID
>>>on all PEs in the same Inner Shareable domain<<<".
I'll soon try to build a new pkg with this applied, for testing by the
reporter. Many thanks!
Laszlo
* Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
2019-01-23 14:02 ` Ard Biesheuvel
2019-01-23 23:04 ` Laszlo Ersek
@ 2019-01-28 10:23 ` Mark Rutland
2019-01-28 10:27 ` Ard Biesheuvel
1 sibling, 1 reply; 15+ messages in thread
From: Mark Rutland @ 2019-01-28 10:23 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Laszlo Ersek, Marc Zyngier, edk2-devel@lists.01.org,
Christoffer Dall, Tanxiaojun
On Wed, Jan 23, 2019 at 03:02:14PM +0100, Ard Biesheuvel wrote:
> On Wed, 23 Jan 2019 at 10:55, Laszlo Ersek <lersek@redhat.com> wrote:
> >
> > On 01/23/19 10:26, Ard Biesheuvel wrote:
> > > On Wed, 23 Jan 2019 at 10:14, Laszlo Ersek <lersek@redhat.com> wrote:
> > >> On 01/22/19 16:37, Ard Biesheuvel wrote:
> >
> > >>> Is SetUefiImageMemoryAttributes() being
> > >>> called to remap the memory R-X ?
> > >>
> > >> No, it is not; the grub binary in question doesn't have the required
> > >> section alignment (... I hope at least that that's what your question
> > >> refers to):
> > >>
> > >>> ProtectUefiImageCommon - 0x3E6C54C0
> > >>> - 0x000000013BEEF000 - 0x0000000000030600
> > >>> !!!!!!!! ProtectUefiImageCommon - Section Alignment(0x200) is incorrect !!!!!!!!
> > >>
> > >
> > > This is puzzling, given that the exact same binary works on Mustang.
> >
> > And even on the original (unspecified) hardware, the same binary works
> > frequently. My understanding is that there are five VMs executing reboot
> > loops in parallel, on the same host, and 4 out of 5 may hit the issue in
> > a reasonable time period (300 reboots or so).
> >
> > > So when loaded, GRUB should cover the following regions:
> > >
> > > 0x13beef0000 - 0x13bf000000 (0x11000)
> > > 0x13bf000000 - 0x13bf01f600 (0x1f600)
> > >
> > > where neither covers a 2 MB block fully, which means that the TLB
> > > entry that we are hitting is stale.
> > >
> > > Since ProtectUefiImageCommon() does not do anything in this case, the
> > > stale translation must be the result of
> > > PcdDxeNxMemoryProtectionPolicy, which either sets the wrong
> > > permissions for EfiLoaderCode (relying on ProtectUefiImageCommon), or
> > > we don't flush the TLBs correctly after updating the permissions when
> > > converting the memory from EfiConventionalMemory to EfiLoaderCode
> > >
> > > Are you using the default value for PcdDxeNxMemoryProtectionPolicy?
> >
> > Yes, we have
> >
> > ArmVirtPkg/ArmVirt.dsc.inc:
> > gEfiMdeModulePkgTokenSpaceGuid.PcdDxeNxMemoryProtectionPolicy|0xC000000000007FD1
> >
> > from commit 1acd7c54a724 ("ArmVirtPkg AARCH64: enable NX memory
> > protection for all platforms", 2017-03-01).
> >
> > The binary is from the RPM
> > "edk2-aarch64-20180508gitee3198e672e2-5.el8+1789+f0947240.noarch", which
> > is basically upstream ee3198e672e2 plus a small number of backports and
> > downstream customizations.
> >
>
> This might help:
>
> > diff --git a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
> index b7173e00b039..4c0b4b4efbd5 100644
> --- a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
> +++ b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
> @@ -138,7 +138,7 @@ ASM_FUNC(ArmUpdateTranslationTableEntry)
>
> ASM_FUNC(ArmInvalidateTlb)
> EL1_OR_EL2_OR_EL3(x0)
> -1: tlbi vmalle1
> +1: tlbi vmalle1is
> b 4f
> 2: tlbi alle2
> b 4f
> > diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> index 90192df24f55..d54b1c19accf 100644
> --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> @@ -34,7 +34,7 @@
>
> // flush the TLBs
> .if \el == 1
> - tlbi vmalle1
> + tlbi vmalle1is
> .else
> tlbi alle\el
> .endif
Assuming that hardware is working correctly, this change shouldn't be
necessary.
KVM sets HCR_EL2.FB, so all TLBI ops will behave as their *IS variant.
Likewise it sets HCR_EL2.BSU, so barriers apply to the inner shareable domain too.
On bare-metal, NSH should be sufficient.
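(For anyone following along, the bit positions of those HCR_EL2 controls,
per the ARM ARM -- shown only for reference, this is not KVM code:

  #define HCR_EL2_FB        (1ULL << 9)   /* force broadcast of TLBI and
                                             IC IALLU to Inner Shareable  */
  #define HCR_EL2_BSU_MASK  (3ULL << 10)  /* barrier shareability upgrade */

)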
Thanks,
Mark.
* Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
2019-01-28 10:23 ` Mark Rutland
@ 2019-01-28 10:27 ` Ard Biesheuvel
0 siblings, 0 replies; 15+ messages in thread
From: Ard Biesheuvel @ 2019-01-28 10:27 UTC (permalink / raw)
To: Mark Rutland
Cc: Laszlo Ersek, Marc Zyngier, edk2-devel@lists.01.org,
Christoffer Dall, Tanxiaojun
On Mon, 28 Jan 2019 at 11:23, Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Wed, Jan 23, 2019 at 03:02:14PM +0100, Ard Biesheuvel wrote:
> > On Wed, 23 Jan 2019 at 10:55, Laszlo Ersek <lersek@redhat.com> wrote:
> > >
> > > On 01/23/19 10:26, Ard Biesheuvel wrote:
> > > > On Wed, 23 Jan 2019 at 10:14, Laszlo Ersek <lersek@redhat.com> wrote:
> > > >> On 01/22/19 16:37, Ard Biesheuvel wrote:
> > >
> > > >>> Is SetUefiImageMemoryAttributes() being
> > > >>> called to remap the memory R-X ?
> > > >>
> > > >> No, it is not; the grub binary in question doesn't have the required
> > > >> section alignment (... I hope at least that that's what your question
> > > >> refers to):
> > > >>
> > > >>> ProtectUefiImageCommon - 0x3E6C54C0
> > > >>> - 0x000000013BEEF000 - 0x0000000000030600
> > > >>> !!!!!!!! ProtectUefiImageCommon - Section Alignment(0x200) is incorrect !!!!!!!!
> > > >>
> > > >
> > > > This is puzzling, given that the exact same binary works on Mustang.
> > >
> > > And even on the original (unspecified) hardware, the same binary works
> > > frequently. My understanding is that there are five VMs executing reboot
> > > loops in parallel, on the same host, and 4 out of 5 may hit the issue in
> > > a reasonable time period (300 reboots or so).
> > >
> > > > So when loaded, GRUB should cover the following regions:
> > > >
> > > > 0x13beef0000 - 0x13bf000000 (0x11000)
> > > > 0x13bf000000 - 0x13bf01f600 (0x1f600)
> > > >
> > > > where neither covers a 2 MB block fully, which means that the TLB
> > > > entry that we are hitting is stale.
> > > >
> > > > Since ProtectUefiImageCommon() does not do anything in this case, the
> > > > stale translation must be the result of
> > > > PcdDxeNxMemoryProtectionPolicy, which either sets the wrong
> > > > permissions for EfiLoaderCode (relying on ProtectUefiImageCommon), or
> > > > we don't flush the TLBs correctly after updating the permissions when
> > > > converting the memory from EfiConventionalMemory to EfiLoaderCode
> > > >
> > > > Are you using the default value for PcdDxeNxMemoryProtectionPolicy?
> > >
> > > Yes, we have
> > >
> > > ArmVirtPkg/ArmVirt.dsc.inc:
> > > gEfiMdeModulePkgTokenSpaceGuid.PcdDxeNxMemoryProtectionPolicy|0xC000000000007FD1
> > >
> > > from commit 1acd7c54a724 ("ArmVirtPkg AARCH64: enable NX memory
> > > protection for all platforms", 2017-03-01).
> > >
> > > The binary is from the RPM
> > > "edk2-aarch64-20180508gitee3198e672e2-5.el8+1789+f0947240.noarch", which
> > > is basically upstream ee3198e672e2 plus a small number of backports and
> > > downstream customizations.
> > >
> >
> > This might help:
> >
> > diff --git a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
> > index b7173e00b039..4c0b4b4efbd5 100644
> > --- a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
> > +++ b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
> > @@ -138,7 +138,7 @@ ASM_FUNC(ArmUpdateTranslationTableEntry)
> >
> > ASM_FUNC(ArmInvalidateTlb)
> > EL1_OR_EL2_OR_EL3(x0)
> > -1: tlbi vmalle1
> > +1: tlbi vmalle1is
> > b 4f
> > 2: tlbi alle2
> > b 4f
> > diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> > index 90192df24f55..d54b1c19accf 100644
> > --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> > +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> > @@ -34,7 +34,7 @@
> >
> > // flush the TLBs
> > .if \el == 1
> > - tlbi vmalle1
> > + tlbi vmalle1is
> > .else
> > tlbi alle\el
> > .endif
>
> Assuming that hardware is working correctly, this change shouldn't be
> necessary.
>
> KVM sets HCR_EL2.FB, so all TLBI ops will behave as their *IS variant.
> Likewise it sets HCR_EL2.BSU, so barriers apply to the inner shareable domain too.
>
> On bare-metal, NSH should be sufficient.
>
Ah wonderful, thanks for clarifying.
* Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
2019-01-23 9:54 ` Laszlo Ersek
2019-01-23 14:02 ` Ard Biesheuvel
@ 2019-01-28 10:46 ` Mark Rutland
2019-01-28 11:54 ` Laszlo Ersek
1 sibling, 1 reply; 15+ messages in thread
From: Mark Rutland @ 2019-01-28 10:46 UTC (permalink / raw)
To: Laszlo Ersek
Cc: Ard Biesheuvel, Marc Zyngier, edk2-devel@lists.01.org,
Christoffer Dall, Tanxiaojun
On Wed, Jan 23, 2019 at 10:54:56AM +0100, Laszlo Ersek wrote:
> On 01/23/19 10:26, Ard Biesheuvel wrote:
> > On Wed, 23 Jan 2019 at 10:14, Laszlo Ersek <lersek@redhat.com> wrote:
> >> On 01/22/19 16:37, Ard Biesheuvel wrote:
>
> >>> Is SetUefiImageMemoryAttributes() being
> >>> called to remap the memory R-X ?
> >>
> >> No, it is not; the grub binary in question doesn't have the required
> >> section alignment (... I hope at least that that's what your question
> >> refers to):
> >>
> >>> ProtectUefiImageCommon - 0x3E6C54C0
> >>> - 0x000000013BEEF000 - 0x0000000000030600
> >>> !!!!!!!! ProtectUefiImageCommon - Section Alignment(0x200) is
> >> incorrect !!!!!!!!
> >
> > This is puzzling, given that the exact same binary works on Mustang.
>
> And even on the original (unspecified) hardware, the same binary works
> frequently. My understanding is that there are five VMs executing reboot
> loops in parallel, on the same host, and 4 out of 5 may hit the issue in
> a reasonable time period (300 reboots or so).
Interesting.
Do you happen to know how many VMID bits the host has? If it has an 8-bit VMID,
this could be indicative of some problem upon overflow.
Can you point us at the host kernel?
Thanks,
Mark.
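For anyone who wants to answer this on their own host, a minimal sketch
(assumed to run at EL1 or above, e.g. from a kernel module; the helper names
are made up): VMIDBits is ID_AA64MMFR1_EL1 bits [7:4], where 0b0000 means an
8-bit VMID and 0b0010 means a 16-bit VMID (ARMv8.1 VMID16).

#include <stdint.h>

// Sketch only: a direct MRS of ID_AA64MMFR1_EL1 needs EL1 or above.
static inline uint64_t ReadIdAa64Mmfr1El1 (void)
{
  uint64_t Value;
  __asm__ volatile ("mrs %0, ID_AA64MMFR1_EL1" : "=r" (Value));
  return Value;
}

// Returns the implemented VMID width in bits, or -1 for a reserved
// encoding of the VMIDBits field (ID_AA64MMFR1_EL1[7:4]).
static int VmidWidthBits (void)
{
  uint64_t VmidBits = (ReadIdAa64Mmfr1El1 () >> 4) & 0xF;

  switch (VmidBits) {
  case 0x0: return 8;    // 8-bit VMID: only 256 values before rollover
  case 0x2: return 16;   // 16-bit VMID (ARMv8.1 VMID16)
  default:  return -1;
  }
}

With only 8 VMID bits the host has to recycle VMIDs (and invalidate TLBs)
much sooner, which is where an overflow bug would be easier to hit.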
* Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
2019-01-28 10:46 ` Mark Rutland
@ 2019-01-28 11:54 ` Laszlo Ersek
[not found] ` <5C4EFF06.2050600@huawei.com>
0 siblings, 1 reply; 15+ messages in thread
From: Laszlo Ersek @ 2019-01-28 11:54 UTC (permalink / raw)
To: Mark Rutland
Cc: Ard Biesheuvel, Marc Zyngier, edk2-devel@lists.01.org,
Christoffer Dall, Tanxiaojun
On 01/28/19 11:46, Mark Rutland wrote:
> On Wed, Jan 23, 2019 at 10:54:56AM +0100, Laszlo Ersek wrote:
>> On 01/23/19 10:26, Ard Biesheuvel wrote:
>>> On Wed, 23 Jan 2019 at 10:14, Laszlo Ersek <lersek@redhat.com> wrote:
>>>> On 01/22/19 16:37, Ard Biesheuvel wrote:
>>
>>>>> Is SetUefiImageMemoryAttributes() being
>>>>> called to remap the memory R-X ?
>>>>
>>>> No, it is not; the grub binary in question doesn't have the required
>>>> section alignment (... I hope at least that that's what your question
>>>> refers to):
>>>>
>>>>> ProtectUefiImageCommon - 0x3E6C54C0
>>>>> - 0x000000013BEEF000 - 0x0000000000030600
>>>>> !!!!!!!! ProtectUefiImageCommon - Section Alignment(0x200) is
>>>> incorrect !!!!!!!!
>>>
>>> This is puzzling, given that the exact same binary works on Mustang.
>>
>> And even on the original (unspecified) hardware, the same binary works
>> frequently. My understanding is that there are five VMs executing reboot
>> loops in parallel, on the same host, and 4 out of 5 may hit the issue in
>> a reasonable time period (300 reboots or so).
>
> Interesting.
>
> Do you happen to know how many VMID bits the host has? If it has an 8-bit VMID,
> this could be indicative of some problem upon overflow.
I'll let Tan Xiaojun (CC'd) answer these questions.
> Can you point us at the host kernel?
In the report, Tan Xiaojun wrote "4.18.0-48.el8.aarch64"; I guess that
information is mostly useless in an upstream discussion. Unfortunately,
I couldn't reproduce the symptom at all (I know nothing about the
hardware in question), so I can't myself retest with an upstream host
kernel.
Thank you!
Laszlo
* Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
[not found] ` <5C4EFF06.2050600@huawei.com>
@ 2019-01-28 13:46 ` Mark Rutland
[not found] ` <5C4FF71B.1060606@huawei.com>
2019-01-28 15:01 ` Laszlo Ersek
1 sibling, 1 reply; 15+ messages in thread
From: Mark Rutland @ 2019-01-28 13:46 UTC (permalink / raw)
To: Tan Xiaojun
Cc: Laszlo Ersek, Ard Biesheuvel, Marc Zyngier,
edk2-devel@lists.01.org, Christoffer Dall
On Mon, Jan 28, 2019 at 09:09:26PM +0800, Tan Xiaojun wrote:
> On 2019/1/28 19:54, Laszlo Ersek wrote:
> > On 01/28/19 11:46, Mark Rutland wrote:
> >> On Wed, Jan 23, 2019 at 10:54:56AM +0100, Laszlo Ersek wrote:
> >>> And even on the original (unspecified) hardware, the same binary works
> >>> frequently. My understanding is that there are five VMs executing reboot
> >>> loops in parallel, on the same host, and 4 out of 5 may hit the issue in
> >>> a reasonable time period (300 reboots or so).
> >>
> >> Interesting.
> >>
> >> Do you happen to know how many VMID bits the host has? If it has an 8-bit VMID,
> >> this could be indicative of some problem upon overflow.
> >
> > I'll let Tan Xiaojun (CC'd) answer these questions.
> >
> >> Can you point us at the host kernel?
> >
> > In the report, Tan Xiaojun wrote "4.18.0-48.el8.aarch64"; I guess that
> > information is mostly useless in an upstream discussion. Unfortunately,
> > I couldn't reproduce the symptom at all (I know nothing about the
> > hardware in question), so I can't myself retest with an upstream host
> > kernel.
>
> I don't understand, what do you want me to do? What is the specific problem?
Could you let us know which CPU/system you've seen this issue with?
... and what the value of ID_AA64MMFR1_EL1.VMIDBits is?
Thanks,
Mark.
* Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
[not found] ` <5C4EFF06.2050600@huawei.com>
2019-01-28 13:46 ` Mark Rutland
@ 2019-01-28 15:01 ` Laszlo Ersek
1 sibling, 0 replies; 15+ messages in thread
From: Laszlo Ersek @ 2019-01-28 15:01 UTC (permalink / raw)
To: Tan Xiaojun
Cc: Mark Rutland, Ard Biesheuvel, Marc Zyngier,
edk2-devel@lists.01.org, Christoffer Dall
On 01/28/19 14:09, Tan Xiaojun wrote:
> On 2019/1/28 19:54, Laszlo Ersek wrote:
>> On 01/28/19 11:46, Mark Rutland wrote:
>>> On Wed, Jan 23, 2019 at 10:54:56AM +0100, Laszlo Ersek wrote:
>>>> On 01/23/19 10:26, Ard Biesheuvel wrote:
>>>>> On Wed, 23 Jan 2019 at 10:14, Laszlo Ersek <lersek@redhat.com> wrote:
>>>>>> On 01/22/19 16:37, Ard Biesheuvel wrote:
>>>>
>>>>>>> Is SetUefiImageMemoryAttributes() being
>>>>>>> called to remap the memory R-X ?
>>>>>>
>>>>>> No, it is not; the grub binary in question doesn't have the required
>>>>>> section alignment (... I hope at least that that's what your question
>>>>>> refers to):
>>>>>>
>>>>>>> ProtectUefiImageCommon - 0x3E6C54C0
>>>>>>> - 0x000000013BEEF000 - 0x0000000000030600
>>>>>>> !!!!!!!! ProtectUefiImageCommon - Section Alignment(0x200) is
>>>>>> incorrect !!!!!!!!
>>>>>
>>>>> This is puzzling, given that the exact same binary works on Mustang.
>>>>
>>>> And even on the original (unspecified) hardware, the same binary works
>>>> frequently. My understanding is that there are five VMs executing reboot
>>>> loops in parallel, on the same host, and 4 out of 5 may hit the issue in
>>>> a reasonable time period (300 reboots or so).
>>>
>>> Interesting.
>>>
>>> Do you happen to know how many VMID bits the host has? If it has an 8-bit VMID,
>>> this could be indicative of some problem upon overflow.
>>
>> I'll let Tan Xiaojun (CC'd) answer these questions.
>>
>>> Can you point us at the host kernel?
>>
>> In the report, Tan Xiaojun wrote "4.18.0-48.el8.aarch64"; I guess that
>> information is mostly useless in an upstream discussion. Unfortunately,
>> I couldn't reproduce the symptom at all (I know nothing about the
>> hardware in question), so I can't myself retest with an upstream host
>> kernel.
>>
>
> I don't understand, what do you want me to do? What is the specific problem?
Sorry, I was unclear. Primarily, please see Mark's explanation.
Secondarily, my point was that the upstream community could help more if
the symptom reproduced on a pristine upstream host kernel. Given that I
don't have access to your hardware that presents the symptom, and that
the symptom doesn't reproduce on hardware that I do have access to
(using the downstream kernel that you reported), only you can attempt to
reproduce the issue with an upstream kernel (and then please report the
findings here).
Thanks,
Laszlo
* Re: [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU
[not found] ` <5C5036DF.9060905@hisilicon.com>
@ 2019-01-29 13:23 ` Laszlo Ersek
0 siblings, 0 replies; 15+ messages in thread
From: Laszlo Ersek @ 2019-01-29 13:23 UTC (permalink / raw)
To: xinliang, Tan Xiaojun
Cc: Mark Rutland, Ard Biesheuvel, Marc Zyngier,
edk2-devel@lists.01.org, Christoffer Dall
On 01/29/19 12:19, xinliang wrote:
>
>
> On 2019/1/29 14:47, Tan Xiaojun wrote:
>> +cc Xinliang
>>
>> Sorry, I didn't respond to your earlier discussion, because this
>> problem is mainly handled by Xinliang.
>>
>> The host we used for testing is a Huawei D06 (AArch64), and its CPU chip is
>> the 1620 (a self-developed chip that follows the Arm v8.2 specification).
>> Its VMID is 16 bits.
>>
>> Thanks.
>> Xiaojun.
>>
>> On 2019/1/28 21:46, Mark Rutland wrote:
>>> On Mon, Jan 28, 2019 at 09:09:26PM +0800, Tan Xiaojun wrote:
>>>> On 2019/1/28 19:54, Laszlo Ersek wrote:
>>>>> On 01/28/19 11:46, Mark Rutland wrote:
>>>>>> On Wed, Jan 23, 2019 at 10:54:56AM +0100, Laszlo Ersek wrote:
>>>>>>> And even on the original (unspecified) hardware, the same binary
>>>>>>> works
>>>>>>> frequently. My understanding is that there are five VMs executing
>>>>>>> reboot
>>>>>>> loops in parallel, on the same host, and 4 out of 5 may hit the
>>>>>>> issue in
>>>>>>> a reasonable time period (300 reboots or so).
>>>>>> Interesting.
>>>>>>
>>>>>> Do you happen to know how many VMID bits the host has? If it has
>>>>>> an 8-bit VMID,
>>>>>> this could be indicative of some problem upon overflow.
>>>>> I'll let Tan Xiaojun (CC'd) answer these questions.
>>>>>
>>>>>> Can you point us at the host kernel?
>>>>> In the report, Tan Xiaojun wrote "4.18.0-48.el8.aarch64"; I guess that
>>>>> information is mostly useless in an upstream discussion.
>>>>> Unfortunately,
>>>>> I couldn't reproduce the symptom at all (I know nothing about the
>>>>> hardware in question), so I can't myself retest with an upstream host
>>>>> kernel.
>>>> I don't understand, what do you want me to do? What is the specific
>>>> problem?
>>> Could you let us know which CPU/system you've seen this issue with?
>>>
>>> ... and what the value of ID_AA64MMFR1_EL1.VMIDBits is?
>
> Hi, our hardware is the D06 board, which is compatible with Armv8.1, so it
> should have a 16-bit VMID.
>
> BTW, we can now reproduce this issue only on guests without shim, such as
> a SUSE SLE15-SP1 guest, where it is easy to reproduce. We are not sure
> whether this finding helps with this issue.
Unfortunately, it doesn't help without access to the host -- I used the
exact same guest installer ISO from the original report, and failed to
reproduce the symptom.
... Since we've been discussing this for a while in public too, can
someone from Huawei please authorize Red Hat to open up
<https://bugzilla.redhat.com/show_bug.cgi?id=1666280> to the public? If
you agree, please comment so on the BZ itself.
Thanks!
Laszlo
end of thread, other threads:[~2019-01-29 13:23 UTC | newest]
Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
[not found] <1449471969-16949-1-git-send-email-ard.biesheuvel@linaro.org>
2019-01-22 15:09 ` [PATCH] ArmPkg: update InvalidateInstructionCacheRange to flush only to PoU Laszlo Ersek
2019-01-22 15:33 ` Laszlo Ersek
2019-01-22 15:37 ` Ard Biesheuvel
2019-01-23 9:14 ` Laszlo Ersek
2019-01-23 9:26 ` Ard Biesheuvel
2019-01-23 9:54 ` Laszlo Ersek
2019-01-23 14:02 ` Ard Biesheuvel
2019-01-23 23:04 ` Laszlo Ersek
2019-01-28 10:23 ` Mark Rutland
2019-01-28 10:27 ` Ard Biesheuvel
2019-01-28 10:46 ` Mark Rutland
2019-01-28 11:54 ` Laszlo Ersek
[not found] ` <5C4EFF06.2050600@huawei.com>
2019-01-28 13:46 ` Mark Rutland
[not found] ` <5C4FF71B.1060606@huawei.com>
[not found] ` <5C5036DF.9060905@hisilicon.com>
2019-01-29 13:23 ` Laszlo Ersek
2019-01-28 15:01 ` Laszlo Ersek