* [PATCH 1/5] ArmPkg/ArmMmuLib AARCH64: fix out of bounds access
2019-01-07 7:14 [PATCH 0/5] memory/MMU hardening for AArch64 Ard Biesheuvel
@ 2019-01-07 7:15 ` Ard Biesheuvel
2019-01-14 12:00 ` Leif Lindholm
2019-01-07 7:15 ` [PATCH 2/5] ArmPkg/ArmMmuLib AARCH64: get rid of needless TLB invalidation Ard Biesheuvel
4 siblings, 1 reply; 19+ messages in thread
From: Ard Biesheuvel @ 2019-01-07 7:15 UTC (permalink / raw)
To: edk2-devel
Take care not to dereference BlockEntry if it may be pointing past
the end of the page table we are manipulating. It is only a read,
and thus harmless, but HeapGuard triggers on it so let's fix it.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
index e41044142ef4..d66df3e17a02 100644
--- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
+++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
@@ -382,7 +382,7 @@ UpdateRegionMapping (
// Break the inner loop when next block is a table
// Rerun GetBlockEntryListFromAddress to avoid page table memory leak
- if (TableLevel != 3 &&
+ if (TableLevel != 3 && BlockEntry <= LastBlockEntry &&
(*BlockEntry & TT_TYPE_MASK) == TT_TYPE_TABLE_ENTRY) {
break;
}
--
2.20.1
* Re: [PATCH 1/5] ArmPkg/ArmMmuLib AARCH64: fix out of bounds access
2019-01-07 7:15 ` [PATCH 1/5] ArmPkg/ArmMmuLib AARCH64: fix out of bounds access Ard Biesheuvel
@ 2019-01-14 12:00 ` Leif Lindholm
2019-01-14 18:48 ` Ard Biesheuvel
0 siblings, 1 reply; 19+ messages in thread
From: Leif Lindholm @ 2019-01-14 12:00 UTC (permalink / raw)
To: Ard Biesheuvel; +Cc: edk2-devel
On Mon, Jan 07, 2019 at 08:15:00AM +0100, Ard Biesheuvel wrote:
> Take care not to dereference BlockEntry if it may be pointing past
> the end of the page table we are manipulating. It is only a read,
> and thus harmless, but HeapGuard triggers on it so let's fix it.
>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
> ---
> ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> index e41044142ef4..d66df3e17a02 100644
> --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> @@ -382,7 +382,7 @@ UpdateRegionMapping (
>
> // Break the inner loop when next block is a table
> // Rerun GetBlockEntryListFromAddress to avoid page table memory leak
> - if (TableLevel != 3 &&
> + if (TableLevel != 3 && BlockEntry <= LastBlockEntry &&
> (*BlockEntry & TT_TYPE_MASK) == TT_TYPE_TABLE_ENTRY) {
> break;
> }
> --
> 2.20.1
>
* Re: [PATCH 1/5] ArmPkg/ArmMmuLib AARCH64: fix out of bounds access
2019-01-14 12:00 ` Leif Lindholm
@ 2019-01-14 18:48 ` Ard Biesheuvel
0 siblings, 0 replies; 19+ messages in thread
From: Ard Biesheuvel @ 2019-01-14 18:48 UTC (permalink / raw)
To: Leif Lindholm; +Cc: edk2-devel@lists.01.org
On Mon, 14 Jan 2019 at 13:00, Leif Lindholm <leif.lindholm@linaro.org> wrote:
>
> On Mon, Jan 07, 2019 at 08:15:00AM +0100, Ard Biesheuvel wrote:
> > Take care not to dereference BlockEntry if it may be pointing past
> > the end of the page table we are manipulating. It is only a read,
> > and thus harmless, but HeapGuard triggers on it so let's fix it.
> >
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>
> Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
>
Thanks
Pushed as d08575759e5a..76c23f9e0d0d
> > ---
> > ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > index e41044142ef4..d66df3e17a02 100644
> > --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > @@ -382,7 +382,7 @@ UpdateRegionMapping (
> >
> > // Break the inner loop when next block is a table
> > // Rerun GetBlockEntryListFromAddress to avoid page table memory leak
> > - if (TableLevel != 3 &&
> > + if (TableLevel != 3 && BlockEntry <= LastBlockEntry &&
> > (*BlockEntry & TT_TYPE_MASK) == TT_TYPE_TABLE_ENTRY) {
> > break;
> > }
> > --
> > 2.20.1
> >
* [PATCH 2/5] ArmPkg/ArmMmuLib AARCH64: get rid of needless TLB invalidation
2019-01-07 7:14 [PATCH 0/5] memory/MMU hardening for AArch64 Ard Biesheuvel
2019-01-07 7:15 ` [PATCH 1/5] ArmPkg/ArmMmuLib AARCH64: fix out of bounds access Ard Biesheuvel
@ 2019-01-07 7:15 ` Ard Biesheuvel
2019-01-23 15:46 ` Leif Lindholm
2019-01-07 7:15 ` [PATCH 3/5] ArmPkg/ArmMmuLib AARCH64: implement support for EFI_MEMORY_RP permissions Ard Biesheuvel
4 siblings, 1 reply; 19+ messages in thread
From: Ard Biesheuvel @ 2019-01-07 7:15 UTC (permalink / raw)
To: edk2-devel
Currently, we always invalidate the TLBs entirely after making
any modification to the page tables. Now that we have introduced
strict memory permissions in quite a number of places, such
modifications occur much more often, and it is better for performance
to flush only those TLB entries that are actually affected by
the changes.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
ArmPkg/Include/Library/ArmMmuLib.h | 3 ++-
ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S | 6 +++---
ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 16 +++++++---------
ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S | 14 ++++++++------
4 files changed, 20 insertions(+), 19 deletions(-)
diff --git a/ArmPkg/Include/Library/ArmMmuLib.h b/ArmPkg/Include/Library/ArmMmuLib.h
index fb7fd006417c..d2725810f1c6 100644
--- a/ArmPkg/Include/Library/ArmMmuLib.h
+++ b/ArmPkg/Include/Library/ArmMmuLib.h
@@ -59,7 +59,8 @@ VOID
EFIAPI
ArmReplaceLiveTranslationEntry (
IN UINT64 *Entry,
- IN UINT64 Value
+ IN UINT64 Value,
+ IN UINT64 Address
);
EFI_STATUS
diff --git a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
index b7173e00b039..175fb58206b6 100644
--- a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
+++ b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
@@ -124,15 +124,15 @@ ASM_FUNC(ArmSetMAIR)
// IN VOID *MVA // X1
// );
ASM_FUNC(ArmUpdateTranslationTableEntry)
- dc civac, x0 // Clean and invalidate data line
- dsb sy
+ dsb nshst
+ lsr x1, x1, #12
EL1_OR_EL2_OR_EL3(x0)
1: tlbi vaae1, x1 // TLB Invalidate VA , EL1
b 4f
2: tlbi vae2, x1 // TLB Invalidate VA , EL2
b 4f
3: tlbi vae3, x1 // TLB Invalidate VA , EL3
-4: dsb sy
+4: dsb nsh
isb
ret
diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
index d66df3e17a02..e1fabfcbea14 100644
--- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
+++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
@@ -129,13 +129,14 @@ STATIC
VOID
ReplaceLiveEntry (
IN UINT64 *Entry,
- IN UINT64 Value
+ IN UINT64 Value,
+ IN UINT64 Address
)
{
if (!ArmMmuEnabled ()) {
*Entry = Value;
} else {
- ArmReplaceLiveTranslationEntry (Entry, Value);
+ ArmReplaceLiveTranslationEntry (Entry, Value, Address);
}
}
@@ -296,7 +297,8 @@ GetBlockEntryListFromAddress (
// Fill the BlockEntry with the new TranslationTable
ReplaceLiveEntry (BlockEntry,
- ((UINTN)TranslationTable & TT_ADDRESS_MASK_DESCRIPTION_TABLE) | TableAttributes | TT_TYPE_TABLE_ENTRY);
+ (UINTN)TranslationTable | TableAttributes | TT_TYPE_TABLE_ENTRY,
+ RegionStart);
}
} else {
if (IndexLevel != PageLevel) {
@@ -375,6 +377,8 @@ UpdateRegionMapping (
*BlockEntry &= BlockEntryMask;
*BlockEntry |= (RegionStart & TT_ADDRESS_MASK_BLOCK_ENTRY) | Attributes | Type;
+ ArmUpdateTranslationTableEntry (BlockEntry, (VOID *)RegionStart);
+
// Go to the next BlockEntry
RegionStart += BlockEntrySize;
RegionLength -= BlockEntrySize;
@@ -487,9 +491,6 @@ ArmSetMemoryAttributes (
return Status;
}
- // Invalidate all TLB entries so changes are synced
- ArmInvalidateTlb ();
-
return EFI_SUCCESS;
}
@@ -512,9 +513,6 @@ SetMemoryRegionAttribute (
return Status;
}
- // Invalidate all TLB entries so changes are synced
- ArmInvalidateTlb ();
-
return EFI_SUCCESS;
}
diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
index 90192df24f55..d40c19b2e3e5 100644
--- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
+++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
@@ -32,11 +32,12 @@
dmb sy
dc ivac, x0
- // flush the TLBs
+ // flush translations for the target address from the TLBs
+ lsr x2, x2, #12
.if \el == 1
- tlbi vmalle1
+ tlbi vaae1, x2
.else
- tlbi alle\el
+ tlbi vae\el, x2
.endif
dsb sy
@@ -48,12 +49,13 @@
//VOID
//ArmReplaceLiveTranslationEntry (
// IN UINT64 *Entry,
-// IN UINT64 Value
+// IN UINT64 Value,
+// IN UINT64 Address
// )
ASM_FUNC(ArmReplaceLiveTranslationEntry)
// disable interrupts
- mrs x2, daif
+ mrs x4, daif
msr daifset, #0xf
isb
@@ -69,7 +71,7 @@ ASM_FUNC(ArmReplaceLiveTranslationEntry)
b 4f
3:__replace_entry 3
-4:msr daif, x2
+4:msr daif, x4
ret
ASM_GLOBAL ASM_PFX(ArmReplaceLiveTranslationEntrySize)
--
2.20.1
* Re: [PATCH 2/5] ArmPkg/ArmMmuLib AARCH64: get rid of needless TLB invalidation
2019-01-07 7:15 ` [PATCH 2/5] ArmPkg/ArmMmuLib AARCH64: get rid of needless TLB invalidation Ard Biesheuvel
@ 2019-01-23 15:46 ` Leif Lindholm
2019-01-23 15:55 ` Ard Biesheuvel
0 siblings, 1 reply; 19+ messages in thread
From: Leif Lindholm @ 2019-01-23 15:46 UTC (permalink / raw)
To: Ard Biesheuvel; +Cc: edk2-devel
On Mon, Jan 07, 2019 at 08:15:01AM +0100, Ard Biesheuvel wrote:
> Currently, we always invalidate the TLBs entirely after making
> any modification to the page tables. Now that we have introduced
> strict memory permissions in quite a number of places, such
> modifications occur much more often, and it is better for performance
> to flush only those TLB entries that are actually affected by
> the changes.
>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
> ArmPkg/Include/Library/ArmMmuLib.h | 3 ++-
> ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S | 6 +++---
> ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 16 +++++++---------
> ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S | 14 ++++++++------
> 4 files changed, 20 insertions(+), 19 deletions(-)
>
> diff --git a/ArmPkg/Include/Library/ArmMmuLib.h b/ArmPkg/Include/Library/ArmMmuLib.h
> index fb7fd006417c..d2725810f1c6 100644
> --- a/ArmPkg/Include/Library/ArmMmuLib.h
> +++ b/ArmPkg/Include/Library/ArmMmuLib.h
> @@ -59,7 +59,8 @@ VOID
> EFIAPI
> ArmReplaceLiveTranslationEntry (
> IN UINT64 *Entry,
> - IN UINT64 Value
> + IN UINT64 Value,
> + IN UINT64 Address
> );
>
> EFI_STATUS
> diff --git a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
> index b7173e00b039..175fb58206b6 100644
> --- a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
> +++ b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
> @@ -124,15 +124,15 @@ ASM_FUNC(ArmSetMAIR)
> // IN VOID *MVA // X1
> // );
> ASM_FUNC(ArmUpdateTranslationTableEntry)
> - dc civac, x0 // Clean and invalidate data line
> - dsb sy
> + dsb nshst
> + lsr x1, x1, #12
> EL1_OR_EL2_OR_EL3(x0)
> 1: tlbi vaae1, x1 // TLB Invalidate VA , EL1
> b 4f
> 2: tlbi vae2, x1 // TLB Invalidate VA , EL2
> b 4f
> 3: tlbi vae3, x1 // TLB Invalidate VA , EL3
> -4: dsb sy
> +4: dsb nsh
> isb
> ret
>
> diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> index d66df3e17a02..e1fabfcbea14 100644
> --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> @@ -129,13 +129,14 @@ STATIC
> VOID
> ReplaceLiveEntry (
> IN UINT64 *Entry,
> - IN UINT64 Value
> + IN UINT64 Value,
> + IN UINT64 Address
> )
> {
> if (!ArmMmuEnabled ()) {
> *Entry = Value;
> } else {
> - ArmReplaceLiveTranslationEntry (Entry, Value);
> + ArmReplaceLiveTranslationEntry (Entry, Value, Address);
> }
> }
>
> @@ -296,7 +297,8 @@ GetBlockEntryListFromAddress (
>
> // Fill the BlockEntry with the new TranslationTable
> ReplaceLiveEntry (BlockEntry,
> - ((UINTN)TranslationTable & TT_ADDRESS_MASK_DESCRIPTION_TABLE) | TableAttributes | TT_TYPE_TABLE_ENTRY);
> + (UINTN)TranslationTable | TableAttributes | TT_TYPE_TABLE_ENTRY,
> + RegionStart);
OK, this whole patch took a few times around the loop before I think I
caught on what was happening.
I think I'm down to the only things confusing me being:
- The name "Address" to refer to something that is always the start
address of a 4KB-aligned translation region.
Is this because the function will be usable in other contexts in
later patches?
- Why drop the & TT_ADDRESS_MASK_DESCRIPTION_TABLE bit here?
Was it just always pointless and you decided to drop it while you
were at it?
/
Leif
> }
> } else {
> if (IndexLevel != PageLevel) {
> @@ -375,6 +377,8 @@ UpdateRegionMapping (
> *BlockEntry &= BlockEntryMask;
> *BlockEntry |= (RegionStart & TT_ADDRESS_MASK_BLOCK_ENTRY) | Attributes | Type;
>
> + ArmUpdateTranslationTableEntry (BlockEntry, (VOID *)RegionStart);
> +
> // Go to the next BlockEntry
> RegionStart += BlockEntrySize;
> RegionLength -= BlockEntrySize;
> @@ -487,9 +491,6 @@ ArmSetMemoryAttributes (
> return Status;
> }
>
> - // Invalidate all TLB entries so changes are synced
> - ArmInvalidateTlb ();
> -
> return EFI_SUCCESS;
> }
>
> @@ -512,9 +513,6 @@ SetMemoryRegionAttribute (
> return Status;
> }
>
> - // Invalidate all TLB entries so changes are synced
> - ArmInvalidateTlb ();
> -
> return EFI_SUCCESS;
> }
>
> diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> index 90192df24f55..d40c19b2e3e5 100644
> --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> @@ -32,11 +32,12 @@
> dmb sy
> dc ivac, x0
>
> - // flush the TLBs
> + // flush translations for the target address from the TLBs
> + lsr x2, x2, #12
> .if \el == 1
> - tlbi vmalle1
> + tlbi vaae1, x2
> .else
> - tlbi alle\el
> + tlbi vae\el, x2
> .endif
> dsb sy
>
> @@ -48,12 +49,13 @@
> //VOID
> //ArmReplaceLiveTranslationEntry (
> // IN UINT64 *Entry,
> -// IN UINT64 Value
> +// IN UINT64 Value,
> +// IN UINT64 Address
> // )
> ASM_FUNC(ArmReplaceLiveTranslationEntry)
>
> // disable interrupts
> - mrs x2, daif
> + mrs x4, daif
> msr daifset, #0xf
> isb
>
> @@ -69,7 +71,7 @@ ASM_FUNC(ArmReplaceLiveTranslationEntry)
> b 4f
> 3:__replace_entry 3
>
> -4:msr daif, x2
> +4:msr daif, x4
> ret
>
> ASM_GLOBAL ASM_PFX(ArmReplaceLiveTranslationEntrySize)
> --
> 2.20.1
>
* Re: [PATCH 2/5] ArmPkg/ArmMmuLib AARCH64: get rid of needless TLB invalidation
2019-01-23 15:46 ` Leif Lindholm
@ 2019-01-23 15:55 ` Ard Biesheuvel
2019-01-23 16:12 ` Leif Lindholm
0 siblings, 1 reply; 19+ messages in thread
From: Ard Biesheuvel @ 2019-01-23 15:55 UTC (permalink / raw)
To: Leif Lindholm; +Cc: edk2-devel@lists.01.org
On Wed, 23 Jan 2019 at 16:46, Leif Lindholm <leif.lindholm@linaro.org> wrote:
>
> On Mon, Jan 07, 2019 at 08:15:01AM +0100, Ard Biesheuvel wrote:
> > Currently, we always invalidate the TLBs entirely after making
> > any modification to the page tables. Now that we have introduced
> > strict memory permissions in quite a number of places, such
> > modifications occur much more often, and it is better for performance
> > to flush only those TLB entries that are actually affected by
> > the changes.
> >
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> > ---
> > ArmPkg/Include/Library/ArmMmuLib.h | 3 ++-
> > ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S | 6 +++---
> > ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 16 +++++++---------
> > ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S | 14 ++++++++------
> > 4 files changed, 20 insertions(+), 19 deletions(-)
> >
> > diff --git a/ArmPkg/Include/Library/ArmMmuLib.h b/ArmPkg/Include/Library/ArmMmuLib.h
> > index fb7fd006417c..d2725810f1c6 100644
> > --- a/ArmPkg/Include/Library/ArmMmuLib.h
> > +++ b/ArmPkg/Include/Library/ArmMmuLib.h
> > @@ -59,7 +59,8 @@ VOID
> > EFIAPI
> > ArmReplaceLiveTranslationEntry (
> > IN UINT64 *Entry,
> > - IN UINT64 Value
> > + IN UINT64 Value,
> > + IN UINT64 Address
> > );
> >
> > EFI_STATUS
> > diff --git a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
> > index b7173e00b039..175fb58206b6 100644
> > --- a/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
> > +++ b/ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S
> > @@ -124,15 +124,15 @@ ASM_FUNC(ArmSetMAIR)
> > // IN VOID *MVA // X1
> > // );
> > ASM_FUNC(ArmUpdateTranslationTableEntry)
> > - dc civac, x0 // Clean and invalidate data line
> > - dsb sy
> > + dsb nshst
> > + lsr x1, x1, #12
> > EL1_OR_EL2_OR_EL3(x0)
> > 1: tlbi vaae1, x1 // TLB Invalidate VA , EL1
> > b 4f
> > 2: tlbi vae2, x1 // TLB Invalidate VA , EL2
> > b 4f
> > 3: tlbi vae3, x1 // TLB Invalidate VA , EL3
> > -4: dsb sy
> > +4: dsb nsh
> > isb
> > ret
> >
> > diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > index d66df3e17a02..e1fabfcbea14 100644
> > --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > @@ -129,13 +129,14 @@ STATIC
> > VOID
> > ReplaceLiveEntry (
> > IN UINT64 *Entry,
> > - IN UINT64 Value
> > + IN UINT64 Value,
> > + IN UINT64 Address
> > )
> > {
> > if (!ArmMmuEnabled ()) {
> > *Entry = Value;
> > } else {
> > - ArmReplaceLiveTranslationEntry (Entry, Value);
> > + ArmReplaceLiveTranslationEntry (Entry, Value, Address);
> > }
> > }
> >
> > @@ -296,7 +297,8 @@ GetBlockEntryListFromAddress (
> >
> > // Fill the BlockEntry with the new TranslationTable
> > ReplaceLiveEntry (BlockEntry,
> > - ((UINTN)TranslationTable & TT_ADDRESS_MASK_DESCRIPTION_TABLE) | TableAttributes | TT_TYPE_TABLE_ENTRY);
> > + (UINTN)TranslationTable | TableAttributes | TT_TYPE_TABLE_ENTRY,
> > + RegionStart);
>
/me pages in the data ...
> OK, this whole patch took a few times around the loop before I think I
> caught on what was happening.
>
> I think I'm down to the only things confusing me being:
> - The name "Address" to refer to something that is always the start
> address of a 4KB-aligned translation region.
> Is this because the function will be usable in other contexts in
> later patches?
I could change it to VirtualAddress if you prefer. It is only passed
for TLB maintenance which is only needed at page granularity, and the
low bits are shifted out anyway.
> - Why drop the & TT_ADDRESS_MASK_DESCRIPTION_TABLE bit here?
> Was it just always pointless and you decided to drop it while you
> were at it?
>
IIRC yes. It is a newly allocated page, so the masking does not do anything.
> /
> Leif
>
> > }
> > } else {
> > if (IndexLevel != PageLevel) {
> > @@ -375,6 +377,8 @@ UpdateRegionMapping (
> > *BlockEntry &= BlockEntryMask;
> > *BlockEntry |= (RegionStart & TT_ADDRESS_MASK_BLOCK_ENTRY) | Attributes | Type;
> >
> > + ArmUpdateTranslationTableEntry (BlockEntry, (VOID *)RegionStart);
> > +
> > // Go to the next BlockEntry
> > RegionStart += BlockEntrySize;
> > RegionLength -= BlockEntrySize;
> > @@ -487,9 +491,6 @@ ArmSetMemoryAttributes (
> > return Status;
> > }
> >
> > - // Invalidate all TLB entries so changes are synced
> > - ArmInvalidateTlb ();
> > -
> > return EFI_SUCCESS;
> > }
> >
> > @@ -512,9 +513,6 @@ SetMemoryRegionAttribute (
> > return Status;
> > }
> >
> > - // Invalidate all TLB entries so changes are synced
> > - ArmInvalidateTlb ();
> > -
> > return EFI_SUCCESS;
> > }
> >
> > diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> > index 90192df24f55..d40c19b2e3e5 100644
> > --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> > +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> > @@ -32,11 +32,12 @@
> > dmb sy
> > dc ivac, x0
> >
> > - // flush the TLBs
> > + // flush translations for the target address from the TLBs
> > + lsr x2, x2, #12
> > .if \el == 1
> > - tlbi vmalle1
> > + tlbi vaae1, x2
> > .else
> > - tlbi alle\el
> > + tlbi vae\el, x2
> > .endif
> > dsb sy
> >
> > @@ -48,12 +49,13 @@
> > //VOID
> > //ArmReplaceLiveTranslationEntry (
> > // IN UINT64 *Entry,
> > -// IN UINT64 Value
> > +// IN UINT64 Value,
> > +// IN UINT64 Address
> > // )
> > ASM_FUNC(ArmReplaceLiveTranslationEntry)
> >
> > // disable interrupts
> > - mrs x2, daif
> > + mrs x4, daif
> > msr daifset, #0xf
> > isb
> >
> > @@ -69,7 +71,7 @@ ASM_FUNC(ArmReplaceLiveTranslationEntry)
> > b 4f
> > 3:__replace_entry 3
> >
> > -4:msr daif, x2
> > +4:msr daif, x4
> > ret
> >
> > ASM_GLOBAL ASM_PFX(ArmReplaceLiveTranslationEntrySize)
> > --
> > 2.20.1
> >
* Re: [PATCH 2/5] ArmPkg/ArmMmuLib AARCH64: get rid of needless TLB invalidation
2019-01-23 15:55 ` Ard Biesheuvel
@ 2019-01-23 16:12 ` Leif Lindholm
2019-01-23 16:16 ` Ard Biesheuvel
0 siblings, 1 reply; 19+ messages in thread
From: Leif Lindholm @ 2019-01-23 16:12 UTC (permalink / raw)
To: Ard Biesheuvel; +Cc: edk2-devel@lists.01.org
On Wed, Jan 23, 2019 at 04:55:28PM +0100, Ard Biesheuvel wrote:
> On Wed, 23 Jan 2019 at 16:46, Leif Lindholm <leif.lindholm@linaro.org> wrote:
> >
> > On Mon, Jan 07, 2019 at 08:15:01AM +0100, Ard Biesheuvel wrote:
> > > Currently, we always invalidate the TLBs entirely after making
> > > any modification to the page tables. Now that we have introduced
> > > strict memory permissions in quite a number of places, such
> > > modifications occur much more often, and it is better for performance
> > > to flush only those TLB entries that are actually affected by
> > > the changes.
> > >
> > > Contributed-under: TianoCore Contribution Agreement 1.1
> > > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> > > ---
> > > ArmPkg/Include/Library/ArmMmuLib.h | 3 ++-
> > > ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S | 6 +++---
> > > ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 16 +++++++---------
> > > ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S | 14 ++++++++------
> > > 4 files changed, 20 insertions(+), 19 deletions(-)
> > >
> > > diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > > index d66df3e17a02..e1fabfcbea14 100644
> > > --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > > +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > > @@ -129,13 +129,14 @@ STATIC
> > > VOID
> > > ReplaceLiveEntry (
> > > IN UINT64 *Entry,
> > > - IN UINT64 Value
> > > + IN UINT64 Value,
> > > + IN UINT64 Address
> > > )
> > > {
> > > if (!ArmMmuEnabled ()) {
> > > *Entry = Value;
> > > } else {
> > > - ArmReplaceLiveTranslationEntry (Entry, Value);
> > > + ArmReplaceLiveTranslationEntry (Entry, Value, Address);
> > > }
> > > }
> > >
> > > @@ -296,7 +297,8 @@ GetBlockEntryListFromAddress (
> > >
> > > // Fill the BlockEntry with the new TranslationTable
> > > ReplaceLiveEntry (BlockEntry,
> > > - ((UINTN)TranslationTable & TT_ADDRESS_MASK_DESCRIPTION_TABLE) | TableAttributes | TT_TYPE_TABLE_ENTRY);
> > > + (UINTN)TranslationTable | TableAttributes | TT_TYPE_TABLE_ENTRY,
> > > + RegionStart);
> >
>
> /me pages in the data ...
>
> > OK, this whole patch took a few times around the loop before I think I
> > caught on what was happening.
> >
> > I think I'm down to the only things confusing me being:
> > - The name "Address" to refer to something that is always the start
> > address of a 4KB-aligned translation region.
> > Is this because the function will be usable in other contexts in
> > later patches?
>
> I could change it to VirtualAddress if you prefer.
> It is only passed
> for TLB maintenance which is only needed at page granularity, and the
> low bits are shifted out anyway.
Yeah, exactly. It would just be nice if the name reflected that. Not
sure VirtualAddress does. VirtualBase? PageBase?
> > - Why drop the & TT_ADDRESS_MASK_DESCRIPTION_TABLE bit here?
> > Was it just always pointless and you decided to drop it while you
> > were at it?
>
> IIRC yes. It is a newly allocated page, so the masking does not do anything.
Yeah, that's fair enough.
Just made me wonder if anything unobvious was going on :)
/
Leif
* Re: [PATCH 2/5] ArmPkg/ArmMmuLib AARCH64: get rid of needless TLB invalidation
2019-01-23 16:12 ` Leif Lindholm
@ 2019-01-23 16:16 ` Ard Biesheuvel
2019-01-23 16:20 ` Leif Lindholm
0 siblings, 1 reply; 19+ messages in thread
From: Ard Biesheuvel @ 2019-01-23 16:16 UTC (permalink / raw)
To: Leif Lindholm; +Cc: edk2-devel@lists.01.org
On Wed, 23 Jan 2019 at 17:13, Leif Lindholm <leif.lindholm@linaro.org> wrote:
>
> On Wed, Jan 23, 2019 at 04:55:28PM +0100, Ard Biesheuvel wrote:
> > On Wed, 23 Jan 2019 at 16:46, Leif Lindholm <leif.lindholm@linaro.org> wrote:
> > >
> > > On Mon, Jan 07, 2019 at 08:15:01AM +0100, Ard Biesheuvel wrote:
> > > > Currently, we always invalidate the TLBs entirely after making
> > > > any modification to the page tables. Now that we have introduced
> > > > strict memory permissions in quite a number of places, such
> > > > modifications occur much more often, and it is better for performance
> > > > to flush only those TLB entries that are actually affected by
> > > > the changes.
> > > >
> > > > Contributed-under: TianoCore Contribution Agreement 1.1
> > > > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> > > > ---
> > > > ArmPkg/Include/Library/ArmMmuLib.h | 3 ++-
> > > > ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S | 6 +++---
> > > > ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 16 +++++++---------
> > > > ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S | 14 ++++++++------
> > > > 4 files changed, 20 insertions(+), 19 deletions(-)
> > > >
> > > > diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > > > index d66df3e17a02..e1fabfcbea14 100644
> > > > --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > > > +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > > > @@ -129,13 +129,14 @@ STATIC
> > > > VOID
> > > > ReplaceLiveEntry (
> > > > IN UINT64 *Entry,
> > > > - IN UINT64 Value
> > > > + IN UINT64 Value,
> > > > + IN UINT64 Address
> > > > )
> > > > {
> > > > if (!ArmMmuEnabled ()) {
> > > > *Entry = Value;
> > > > } else {
> > > > - ArmReplaceLiveTranslationEntry (Entry, Value);
> > > > + ArmReplaceLiveTranslationEntry (Entry, Value, Address);
> > > > }
> > > > }
> > > >
> > > > @@ -296,7 +297,8 @@ GetBlockEntryListFromAddress (
> > > >
> > > > // Fill the BlockEntry with the new TranslationTable
> > > > ReplaceLiveEntry (BlockEntry,
> > > > - ((UINTN)TranslationTable & TT_ADDRESS_MASK_DESCRIPTION_TABLE) | TableAttributes | TT_TYPE_TABLE_ENTRY);
> > > > + (UINTN)TranslationTable | TableAttributes | TT_TYPE_TABLE_ENTRY,
> > > > + RegionStart);
> > >
> >
> > /me pages in the data ...
> >
> > > OK, this whole patch took a few times around the loop before I think I
> > > caught on what was happening.
> > >
> > > I think I'm down to the only things confusing me being:
> > > - The name "Address" to refer to something that is always the start
> > > address of a 4KB-aligned translation region.
> > > Is this because the function will be usable in other contexts in
> > > later patches?
> >
> > I could change it to VirtualAddress if you prefer.
> > It is only passed
> > for TLB maintenance which is only needed at page granularity, and the
> > low bits are shifted out anyway.
>
> Yeah, exactly. It would just be nice if the name reflected that. Not
> sure VirtualAddress does. VirtualBase? PageBase?
>
Well, the argument passed in is called RegionStart, so let's just
stick with that.
> > > - Why drop the & TT_ADDRESS_MASK_DESCRIPTION_TABLE bit here?
> > > Was it just always pointless and you decided to drop it while you
> > > were at it?
> >
> > IIRC yes. It is a newly allocated page, so the masking does not do anything.
>
> Yeah, that's fair enough.
> Just made me wonder if anything unobvious was going on :)
>
> /
> Leif
* Re: [PATCH 2/5] ArmPkg/ArmMmuLib AARCH64: get rid of needless TLB invalidation
2019-01-23 16:16 ` Ard Biesheuvel
@ 2019-01-23 16:20 ` Leif Lindholm
2019-01-28 12:29 ` Ard Biesheuvel
0 siblings, 1 reply; 19+ messages in thread
From: Leif Lindholm @ 2019-01-23 16:20 UTC (permalink / raw)
To: Ard Biesheuvel; +Cc: edk2-devel@lists.01.org
On Wed, Jan 23, 2019 at 05:16:57PM +0100, Ard Biesheuvel wrote:
> On Wed, 23 Jan 2019 at 17:13, Leif Lindholm <leif.lindholm@linaro.org> wrote:
> >
> > On Wed, Jan 23, 2019 at 04:55:28PM +0100, Ard Biesheuvel wrote:
> > > On Wed, 23 Jan 2019 at 16:46, Leif Lindholm <leif.lindholm@linaro.org> wrote:
> > > >
> > > > On Mon, Jan 07, 2019 at 08:15:01AM +0100, Ard Biesheuvel wrote:
> > > > > Currently, we always invalidate the TLBs entirely after making
> > > > > any modification to the page tables. Now that we have introduced
> > > > > strict memory permissions in quite a number of places, such
> > > > > modifications occur much more often, and it is better for performance
> > > > > to flush only those TLB entries that are actually affected by
> > > > > the changes.
> > > > >
> > > > > Contributed-under: TianoCore Contribution Agreement 1.1
> > > > > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> > > > > ---
> > > > > ArmPkg/Include/Library/ArmMmuLib.h | 3 ++-
> > > > > ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S | 6 +++---
> > > > > ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 16 +++++++---------
> > > > > ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S | 14 ++++++++------
> > > > > 4 files changed, 20 insertions(+), 19 deletions(-)
> > > > >
> > > > > diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > > > > index d66df3e17a02..e1fabfcbea14 100644
> > > > > --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > > > > +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > > > > @@ -129,13 +129,14 @@ STATIC
> > > > > VOID
> > > > > ReplaceLiveEntry (
> > > > > IN UINT64 *Entry,
> > > > > - IN UINT64 Value
> > > > > + IN UINT64 Value,
> > > > > + IN UINT64 Address
> > > > > )
> > > > > {
> > > > > if (!ArmMmuEnabled ()) {
> > > > > *Entry = Value;
> > > > > } else {
> > > > > - ArmReplaceLiveTranslationEntry (Entry, Value);
> > > > > + ArmReplaceLiveTranslationEntry (Entry, Value, Address);
> > > > > }
> > > > > }
> > > > >
> > > > > @@ -296,7 +297,8 @@ GetBlockEntryListFromAddress (
> > > > >
> > > > > // Fill the BlockEntry with the new TranslationTable
> > > > > ReplaceLiveEntry (BlockEntry,
> > > > > - ((UINTN)TranslationTable & TT_ADDRESS_MASK_DESCRIPTION_TABLE) | TableAttributes | TT_TYPE_TABLE_ENTRY);
> > > > > + (UINTN)TranslationTable | TableAttributes | TT_TYPE_TABLE_ENTRY,
> > > > > + RegionStart);
> > > >
> > >
> > > /me pages in the data ...
> > >
> > > > OK, this whole patch took a few times around the loop before I think I
> > > > caught on what was happening.
> > > >
> > > > I think I'm down to the only things confusing me being:
> > > > - The name "Address" to refer to something that is always the start
> > > > address of a 4KB-aligned translation region.
> > > > Is this because the function will be usable in other contexts in
> > > > later patches?
> > >
> > > I could change it to VirtualAddress if you prefer.
> > > It is only passed
> > > for TLB maintenance which is only needed at page granularity, and the
> > > low bits are shifted out anyway.
> >
> > Yeah, exactly. It would just be nice if the name reflected that. Not
> > sure VirtualAddress does. VirtualBase? PageBase?
> >
>
> Well, the argument passed in is called RegionStart, so let's just
> stick with that.
Sure. With that:
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
> > > > - Why drop the & TT_ADDRESS_MASK_DESCRIPTION_TABLE bit here?
> > > > Was it just always pointless and you decided to drop it while you
> > > > were at it?
> > >
> > > IIRC yes. It is a newly allocated page, so the masking does not do anything.
> >
> > Yeah, that's fair enough.
> > Just made me wonder if anything unobvious was going on :)
> >
> > /
> > Leif
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 2/5] ArmPkg/ArmMmuLib AARCH64: get rid of needless TLB invalidation
2019-01-23 16:20 ` Leif Lindholm
@ 2019-01-28 12:29 ` Ard Biesheuvel
2019-01-28 18:01 ` Leif Lindholm
0 siblings, 1 reply; 19+ messages in thread
From: Ard Biesheuvel @ 2019-01-28 12:29 UTC (permalink / raw)
To: Leif Lindholm; +Cc: edk2-devel@lists.01.org
On Wed, 23 Jan 2019 at 17:20, Leif Lindholm <leif.lindholm@linaro.org> wrote:
>
> On Wed, Jan 23, 2019 at 05:16:57PM +0100, Ard Biesheuvel wrote:
> > On Wed, 23 Jan 2019 at 17:13, Leif Lindholm <leif.lindholm@linaro.org> wrote:
> > >
> > > On Wed, Jan 23, 2019 at 04:55:28PM +0100, Ard Biesheuvel wrote:
> > > > On Wed, 23 Jan 2019 at 16:46, Leif Lindholm <leif.lindholm@linaro.org> wrote:
> > > > >
> > > > > On Mon, Jan 07, 2019 at 08:15:01AM +0100, Ard Biesheuvel wrote:
> > > > > > Currently, we always invalidate the TLBs entirely after making
> > > > > > any modification to the page tables. Now that we have introduced
> > > > > > strict memory permissions in quite a number of places, such
> > > > > > modifications occur much more often, and it is better for performance
> > > > > > to flush only those TLB entries that are actually affected by
> > > > > > the changes.
> > > > > >
> > > > > > Contributed-under: TianoCore Contribution Agreement 1.1
> > > > > > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> > > > > > ---
> > > > > > ArmPkg/Include/Library/ArmMmuLib.h | 3 ++-
> > > > > > ArmPkg/Library/ArmLib/AArch64/ArmLibSupport.S | 6 +++---
> > > > > > ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 16 +++++++---------
> > > > > > ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S | 14 ++++++++------
> > > > > > 4 files changed, 20 insertions(+), 19 deletions(-)
> > > > > >
> > > > > > diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > > > > > index d66df3e17a02..e1fabfcbea14 100644
> > > > > > --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > > > > > +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > > > > > @@ -129,13 +129,14 @@ STATIC
> > > > > > VOID
> > > > > > ReplaceLiveEntry (
> > > > > > IN UINT64 *Entry,
> > > > > > - IN UINT64 Value
> > > > > > + IN UINT64 Value,
> > > > > > + IN UINT64 Address
> > > > > > )
> > > > > > {
> > > > > > if (!ArmMmuEnabled ()) {
> > > > > > *Entry = Value;
> > > > > > } else {
> > > > > > - ArmReplaceLiveTranslationEntry (Entry, Value);
> > > > > > + ArmReplaceLiveTranslationEntry (Entry, Value, Address);
> > > > > > }
> > > > > > }
> > > > > >
> > > > > > @@ -296,7 +297,8 @@ GetBlockEntryListFromAddress (
> > > > > >
> > > > > > // Fill the BlockEntry with the new TranslationTable
> > > > > > ReplaceLiveEntry (BlockEntry,
> > > > > > - ((UINTN)TranslationTable & TT_ADDRESS_MASK_DESCRIPTION_TABLE) | TableAttributes | TT_TYPE_TABLE_ENTRY);
> > > > > > + (UINTN)TranslationTable | TableAttributes | TT_TYPE_TABLE_ENTRY,
> > > > > > + RegionStart);
> > > > >
> > > >
> > > > /me pages in the data ...
> > > >
> > > > > OK, this whole patch took a few times around the loop before I think I
> > > > > caught on what was happening.
> > > > >
> > > > > I think I'm down to the only things confusing me being:
> > > > > - The name "Address" to refer to something that is always the start
> > > > > address of a 4KB-aligned translation region.
> > > > > Is this because the function will be usable in other contexts in
> > > > > later patches?
> > > >
> > > > I could change it to VirtualAddress if you prefer.
> > > > It is only passed
> > > > for TLB maintenance which is only needed at page granularity, and the
> > > > low bits are shifted out anyway.
> > >
> > > Yeah, exactly. It would just be nice if the name reflected that. Not
> > > sure VirtualAddress does. VirtualBase? PageBase?
> > >
> >
> > Well, the argument passed in is called RegionStart, so let's just
> > stick with that.
>
> Sure. With that:
> Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
>
Thanks
Given the discussion in the other thread regarding shareability
upgrades of barriers and TLB maintenance instructions when running
under virt, mind if I fold in the hunk below? (and add a mention to
the commit log)
--- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
+++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
@@ -35,7 +35,7 @@
// flush translations for the target address from the TLBs
lsr x2, x2, #12
tlbi vae\el, x2
- dsb sy
+ dsb nsh
// re-enable the MMU
msr sctlr_el\el, x8
@@ -58,7 +58,7 @@ ASM_FUNC(ArmReplaceLiveTranslationEntry)
// clean and invalidate first so that we don't clobber
// adjacent entries that are dirty in the caches
dc civac, x0
- dsb ish
+ dsb nsh
EL1_OR_EL2_OR_EL3(x3)
1:__replace_entry 1
The first one reduces the scope of the tlb maintenance of a live entry
to the current CPU (and when running under virt, the hypervisor will
upgrade this to inner shareable)
The second one prevents the cache maintenance from being broadcast
unnecessarily, and will be upgraded back to ish in the same way when
running under virt.
^ permalink raw reply [flat|nested] 19+ messages in thread
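[The point made above -- that the low bits of the address are shifted out anyway, so only the 4 KB page matters -- can be sketched as a tiny host-side model. This is illustrative code, not ArmPkg code; it just mirrors the `lsr x2, x2, #12` feeding `tlbi vae\el` in the hunk.]

```c
#include <stdint.h>

/* Illustrative sketch: the operand of `tlbi vae<el>` is the virtual
 * address shifted right by 12, matching `lsr x2, x2, #12` above. Any
 * offset inside the 4 KB page is discarded, which is why the spelling
 * of the Address/RegionStart parameter only matters up to page
 * granularity. */
static uint64_t
TlbiVaOperand (uint64_t VirtualAddress)
{
  return VirtualAddress >> 12;  /* page number encoded in Xt for TLBI VAE<el> */
}
```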
* Re: [PATCH 2/5] ArmPkg/ArmMmuLib AARCH64: get rid of needless TLB invalidation
2019-01-28 12:29 ` Ard Biesheuvel
@ 2019-01-28 18:01 ` Leif Lindholm
2019-01-29 10:32 ` Ard Biesheuvel
0 siblings, 1 reply; 19+ messages in thread
From: Leif Lindholm @ 2019-01-28 18:01 UTC (permalink / raw)
To: Ard Biesheuvel; +Cc: edk2-devel@lists.01.org
On Mon, Jan 28, 2019 at 01:29:54PM +0100, Ard Biesheuvel wrote:
> > > > > > > @@ -296,7 +297,8 @@ GetBlockEntryListFromAddress (
> > > > > > >
> > > > > > > // Fill the BlockEntry with the new TranslationTable
> > > > > > > ReplaceLiveEntry (BlockEntry,
> > > > > > > - ((UINTN)TranslationTable & TT_ADDRESS_MASK_DESCRIPTION_TABLE) | TableAttributes | TT_TYPE_TABLE_ENTRY);
> > > > > > > + (UINTN)TranslationTable | TableAttributes | TT_TYPE_TABLE_ENTRY,
> > > > > > > + RegionStart);
> > > > > >
> > > > >
> > > > > /me pages in the data ...
> > > > >
> > > > > > OK, this whole patch took a few times around the loop before I think I
> > > > > > caught on what was happening.
> > > > > >
> > > > > > I think I'm down to the only things confusing me being:
> > > > > > - The name "Address" to refer to something that is always the start
> > > > > > address of a 4KB-aligned translation region.
> > > > > > Is this because the function will be usable in other contexts in
> > > > > > later patches?
> > > > >
> > > > > I could change it to VirtualAddress if you prefer.
> > > > > It is only passed
> > > > > for TLB maintenance which is only needed at page granularity, and the
> > > > > low bits are shifted out anyway.
> > > >
> > > > Yeah, exactly. It would just be nice if the name reflected that. Not
> > > > sure VirtualAddress does. VirtualBase? PageBase?
> > > >
> > >
> > > Well, the argument passed in is called RegionStart, so let's just
> > > stick with that.
> >
> > Sure. With that:
> > Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
> >
>
> Thanks
>
> Given the discussion in the other thread regarding shareability
> upgrades of barriers and TLB maintenance instructions when running
> under virt, mind if I fold in the hunk below? (and add a mention to
> the commit log)
>
> --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> @@ -35,7 +35,7 @@
> // flush translations for the target address from the TLBs
> lsr x2, x2, #12
> tlbi vae\el, x2
> - dsb sy
> + dsb nsh
So, this one, definitely. MMU is off, shareability is a shady concept
at best, so it's arguably a fix.
> // re-enable the MMU
> msr sctlr_el\el, x8
> @@ -58,7 +58,7 @@ ASM_FUNC(ArmReplaceLiveTranslationEntry)
> // clean and invalidate first so that we don't clobber
> // adjacent entries that are dirty in the caches
> dc civac, x0
> - dsb ish
> + dsb nsh
This one I guess is safe because we know we're never going to get
pre-empted or migrated (because interrupts are disabled and we don't
do that sort of thing)?
If that's the rationale:
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
/
Leif
> EL1_OR_EL2_OR_EL3(x3)
> 1:__replace_entry 1
>
> The first one reduces the scope of the tlb maintenance of a live entry
> to the current CPU (and when running under virt, the hypervisor will
> upgrade this to inner shareable)
> The second one prevents the cache maintenance from being broadcast
> unnecessarily, and will be upgraded back to ish in the same way when
> running under virt.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 2/5] ArmPkg/ArmMmuLib AARCH64: get rid of needless TLB invalidation
2019-01-28 18:01 ` Leif Lindholm
@ 2019-01-29 10:32 ` Ard Biesheuvel
0 siblings, 0 replies; 19+ messages in thread
From: Ard Biesheuvel @ 2019-01-29 10:32 UTC (permalink / raw)
To: Leif Lindholm; +Cc: edk2-devel@lists.01.org
On Mon, 28 Jan 2019 at 19:01, Leif Lindholm <leif.lindholm@linaro.org> wrote:
>
> On Mon, Jan 28, 2019 at 01:29:54PM +0100, Ard Biesheuvel wrote:
> > > > > > > > @@ -296,7 +297,8 @@ GetBlockEntryListFromAddress (
> > > > > > > >
> > > > > > > > // Fill the BlockEntry with the new TranslationTable
> > > > > > > > ReplaceLiveEntry (BlockEntry,
> > > > > > > > - ((UINTN)TranslationTable & TT_ADDRESS_MASK_DESCRIPTION_TABLE) | TableAttributes | TT_TYPE_TABLE_ENTRY);
> > > > > > > > + (UINTN)TranslationTable | TableAttributes | TT_TYPE_TABLE_ENTRY,
> > > > > > > > + RegionStart);
> > > > > > >
> > > > > >
> > > > > > /me pages in the data ...
> > > > > >
> > > > > > > OK, this whole patch took a few times around the loop before I think I
> > > > > > > caught on what was happening.
> > > > > > >
> > > > > > > I think I'm down to the only things confusing me being:
> > > > > > > - The name "Address" to refer to something that is always the start
> > > > > > > address of a 4KB-aligned translation region.
> > > > > > > Is this because the function will be usable in other contexts in
> > > > > > > later patches?
> > > > > >
> > > > > > I could change it to VirtualAddress if you prefer.
> > > > > > It is only passed
> > > > > > for TLB maintenance which is only needed at page granularity, and the
> > > > > > low bits are shifted out anyway.
> > > > >
> > > > > Yeah, exactly. It would just be nice if the name reflected that. Not
> > > > > sure VirtualAddress does. VirtualBase? PageBase?
> > > > >
> > > >
> > > > Well, the argument passed in is called RegionStart, so let's just
> > > > stick with that.
> > >
> > > Sure. With that:
> > > Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
> > >
> >
> > Thanks
> >
> > Given the discussion in the other thread regarding shareability
> > upgrades of barriers and TLB maintenance instructions when running
> > under virt, mind if I fold in the hunk below? (and add a mention to
> > the commit log)
> >
> > --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> > +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibReplaceEntry.S
> > @@ -35,7 +35,7 @@
> > // flush translations for the target address from the TLBs
> > lsr x2, x2, #12
> > tlbi vae\el, x2
> > - dsb sy
> > + dsb nsh
>
> So, this one, definitely. MMU is off, shareability is a shady concept
> at best, so it's arguably a fix.
>
> > // re-enable the MMU
> > msr sctlr_el\el, x8
> > @@ -58,7 +58,7 @@ ASM_FUNC(ArmReplaceLiveTranslationEntry)
> > // clean and invalidate first so that we don't clobber
> > // adjacent entries that are dirty in the caches
> > dc civac, x0
> > - dsb ish
> > + dsb nsh
>
> This one I guess is safe because we know we're never going to get
> pre-empted or migrated (because interrupts are disabled and we don't
> do that sort of thing)?
>
Well, we can only get pre-empted under virt, in which case HCR_EL2.BSU
will be set so that barriers are upgraded to inner-shareable. In bare
metal cases, we only care about the local CPU since there are no other
masters (nor DMA capable devices) that care about this cache
maintenance having completed since the page tables are used by the CPU
only.
> If that's the rationale:
> Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
>
Thanks
Pushed as f34b38fae614..d5788777bcc7
^ permalink raw reply [flat|nested] 19+ messages in thread
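[The dispatch that this sub-thread revolves around -- plain store when the MMU is off, break-before-make assembly helper when it is live -- can be modelled on the host like below. This is a sketch, not the merged code: the real ArmReplaceLiveTranslationEntry is assembly, and the globals here are hypothetical stand-ins.]

```c
#include <stdint.h>
#include <stdbool.h>

/* Host-side model (NOT the real code) of the ReplaceLiveEntry dispatch:
 * with the MMU off a plain store suffices; with it on, the update must
 * go through the break-before-make assembly helper, which also takes
 * the region's virtual address for the page-granular `tlbi vae<el>`. */
static bool     mMmuEnabledModel = true;
static unsigned mAsmHelperCalls  = 0;

static void
ArmReplaceLiveTranslationEntryModel (uint64_t *Entry, uint64_t Value,
                                     uint64_t Address)
{
  /* Stand-in for the assembly routine: it masks interrupts, disables
   * the MMU, writes the entry, invalidates the TLB for Address >> 12
   * (with `dsb nsh` scope, per the hunk above), and re-enables the
   * MMU. */
  (void)Address;
  *Entry = Value;
  mAsmHelperCalls++;
}

static void
ReplaceLiveEntryModel (uint64_t *Entry, uint64_t Value, uint64_t Address)
{
  if (!mMmuEnabledModel) {
    *Entry = Value;           /* safe: the page tables are not live */
  } else {
    ArmReplaceLiveTranslationEntryModel (Entry, Value, Address);
  }
}
```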
* [PATCH 3/5] ArmPkg/ArmMmuLib AARCH64: implement support for EFI_MEMORY_RP permissions
2019-01-07 7:14 [PATCH 0/5] memory/MMU hardening for AArch64 Ard Biesheuvel
2019-01-07 7:15 ` [PATCH 1/5] ArmPkg/ArmMmuLib AARCH64: fix out of bounds access Ard Biesheuvel
2019-01-07 7:15 ` [PATCH 2/5] ArmPkg/ArmMmuLib AARCH64: get rid of needless TLB invalidation Ard Biesheuvel
@ 2019-01-07 7:15 ` Ard Biesheuvel
2019-01-14 14:29 ` Leif Lindholm
2019-01-07 7:15 ` [PATCH 4/5] ArmPkg/ArmMmuLib AARCH64: add support for read-only page tables Ard Biesheuvel
2019-01-07 7:15 ` [PATCH 5/5] ArmPkg/CpuDxe: switch to read-only page tables at EndOfDxe Ard Biesheuvel
4 siblings, 1 reply; 19+ messages in thread
From: Ard Biesheuvel @ 2019-01-07 7:15 UTC (permalink / raw)
To: edk2-devel
Wire up the access flag (AF) page table attribute to the EFI_MEMORY_RP
permission attribute, so that attempts to read from such a region will
trigger an access flag fault.
Note that this is a stronger notion than just read protection, since
it now implies that any write or execute attempt is trapped as well.
However, this does not really matter in practice since we never assume
that a read protected page is writable or executable, and StackGuard
and HeapGuard (which are the primary users of this facility) certainly
don't care.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c | 5 +++--
ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 14 +++++++++++---
2 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c b/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c
index 3e216c7cb235..e62e3fa87112 100644
--- a/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c
+++ b/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c
@@ -223,8 +223,9 @@ EfiAttributeToArmAttribute (
ArmAttributes = TT_ATTR_INDX_MASK;
}
- // Set the access flag to match the block attributes
- ArmAttributes |= TT_AF;
+ if ((EfiAttributes & EFI_MEMORY_RP) == 0) {
+ ArmAttributes |= TT_AF;
+ }
// Determine protection attributes
if (EfiAttributes & EFI_MEMORY_RO) {
diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
index e1fabfcbea14..b59c081a7e49 100644
--- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
+++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
@@ -102,6 +102,10 @@ PageAttributeToGcdAttribute (
GcdAttributes |= EFI_MEMORY_XP;
}
+ if ((PageAttributes & TT_AF) == 0) {
+ GcdAttributes |= EFI_MEMORY_RP;
+ }
+
return GcdAttributes;
}
@@ -451,7 +455,11 @@ GcdAttributeToPageAttribute (
PageAttributes |= TT_AP_RO_RO;
}
- return PageAttributes | TT_AF;
+ if ((GcdAttributes & EFI_MEMORY_RP) == 0) {
+ PageAttributes |= TT_AF;
+ }
+
+ return PageAttributes;
}
EFI_STATUS
@@ -474,9 +482,9 @@ ArmSetMemoryAttributes (
// No memory type was set in Attributes, so we are going to update the
// permissions only.
//
- PageAttributes &= TT_AP_MASK | TT_UXN_MASK | TT_PXN_MASK;
+ PageAttributes &= TT_AP_MASK | TT_UXN_MASK | TT_PXN_MASK | TT_AF;
PageAttributeMask = ~(TT_ADDRESS_MASK_BLOCK_ENTRY | TT_AP_MASK |
- TT_PXN_MASK | TT_XN_MASK);
+ TT_PXN_MASK | TT_XN_MASK | TT_AF);
}
TranslationTable = ArmGetTTBR0BaseAddress ();
--
2.20.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
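[The AF <-> EFI_MEMORY_RP wiring in the patch above reduces to a pair of predicates. The sketch below uses stand-in constants -- the real definitions live in the ArmPkg headers and the UEFI spec (AF is descriptor bit 10; EFI_MEMORY_RP is 0x2000) -- and is illustrative only.]

```c
#include <stdint.h>

/* Stand-ins for the real ArmPkg/UEFI definitions, for illustration. */
#define TT_AF          (1ULL << 10)  /* access flag, descriptor bit 10 */
#define EFI_MEMORY_RP  0x2000ULL    /* read-protect, per UEFI spec */

/* EFI_MEMORY_RP set  -> leave AF clear, so any access (read, write, or
 * execute) takes an access-flag fault;
 * EFI_MEMORY_RP clear -> set AF, as the code did unconditionally
 * before the patch. */
static uint64_t
RpToAf (uint64_t EfiAttributes)
{
  return (EfiAttributes & EFI_MEMORY_RP) ? 0 : TT_AF;
}

static uint64_t
AfToRp (uint64_t PageAttributes)
{
  return (PageAttributes & TT_AF) ? 0 : EFI_MEMORY_RP;
}
```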
* Re: [PATCH 3/5] ArmPkg/ArmMmuLib AARCH64: implement support for EFI_MEMORY_RP permissions
2019-01-07 7:15 ` [PATCH 3/5] ArmPkg/ArmMmuLib AARCH64: implement support for EFI_MEMORY_RP permissions Ard Biesheuvel
@ 2019-01-14 14:29 ` Leif Lindholm
2019-01-14 14:59 ` Ard Biesheuvel
0 siblings, 1 reply; 19+ messages in thread
From: Leif Lindholm @ 2019-01-14 14:29 UTC (permalink / raw)
To: Ard Biesheuvel, charles.garcia-tobin; +Cc: edk2-devel
On Mon, Jan 07, 2019 at 08:15:02AM +0100, Ard Biesheuvel wrote:
> Wire up the access flag (AF) page table attribute to the EFI_MEMORY_RP
> permission attribute, so that attempts to read from such a region will
> trigger an access flag fault.
>
> Note that this is a stronger notion than just read protection, since
> it now implies that any write or execute attempt is trapped as well.
> However, this does not really matter in practice since we never assume
> that a read protected page is writable or executable, and StackGuard
> and HeapGuard (which are the primary users of this facility) certainly
> don't care.
So ... I'm cautiously positive to this patch.
But this use does contradict the UEFI spec (2.7a, 2.3.6.1 Memory
types), which says EFI_MEMORY_RP is "not used or defined" for AArch64.
Charles?
/
Leif
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
> ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c | 5 +++--
> ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 14 +++++++++++---
> 2 files changed, 14 insertions(+), 5 deletions(-)
>
> diff --git a/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c b/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c
> index 3e216c7cb235..e62e3fa87112 100644
> --- a/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c
> +++ b/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c
> @@ -223,8 +223,9 @@ EfiAttributeToArmAttribute (
> ArmAttributes = TT_ATTR_INDX_MASK;
> }
>
> - // Set the access flag to match the block attributes
> - ArmAttributes |= TT_AF;
> + if ((EfiAttributes & EFI_MEMORY_RP) == 0) {
> + ArmAttributes |= TT_AF;
> + }
>
> // Determine protection attributes
> if (EfiAttributes & EFI_MEMORY_RO) {
> diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> index e1fabfcbea14..b59c081a7e49 100644
> --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> @@ -102,6 +102,10 @@ PageAttributeToGcdAttribute (
> GcdAttributes |= EFI_MEMORY_XP;
> }
>
> + if ((PageAttributes & TT_AF) == 0) {
> + GcdAttributes |= EFI_MEMORY_RP;
> + }
> +
> return GcdAttributes;
> }
>
> @@ -451,7 +455,11 @@ GcdAttributeToPageAttribute (
> PageAttributes |= TT_AP_RO_RO;
> }
>
> - return PageAttributes | TT_AF;
> + if ((GcdAttributes & EFI_MEMORY_RP) == 0) {
> + PageAttributes |= TT_AF;
> + }
> +
> + return PageAttributes;
> }
>
> EFI_STATUS
> @@ -474,9 +482,9 @@ ArmSetMemoryAttributes (
> // No memory type was set in Attributes, so we are going to update the
> // permissions only.
> //
> - PageAttributes &= TT_AP_MASK | TT_UXN_MASK | TT_PXN_MASK;
> + PageAttributes &= TT_AP_MASK | TT_UXN_MASK | TT_PXN_MASK | TT_AF;
> PageAttributeMask = ~(TT_ADDRESS_MASK_BLOCK_ENTRY | TT_AP_MASK |
> - TT_PXN_MASK | TT_XN_MASK);
> + TT_PXN_MASK | TT_XN_MASK | TT_AF);
> }
>
> TranslationTable = ArmGetTTBR0BaseAddress ();
> --
> 2.20.1
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 3/5] ArmPkg/ArmMmuLib AARCH64: implement support for EFI_MEMORY_RP permissions
2019-01-14 14:29 ` Leif Lindholm
@ 2019-01-14 14:59 ` Ard Biesheuvel
2019-01-14 15:06 ` Leif Lindholm
0 siblings, 1 reply; 19+ messages in thread
From: Ard Biesheuvel @ 2019-01-14 14:59 UTC (permalink / raw)
To: Leif Lindholm; +Cc: Charles Garcia-Tobin, edk2-devel@lists.01.org
On Mon, 14 Jan 2019 at 15:29, Leif Lindholm <leif.lindholm@linaro.org> wrote:
>
> On Mon, Jan 07, 2019 at 08:15:02AM +0100, Ard Biesheuvel wrote:
> > Wire up the access flag (AF) page table attribute to the EFI_MEMORY_RP
> > permission attribute, so that attempts to read from such a region will
> > trigger an access flag fault.
> >
> > Note that this is a stronger notion than just read protection, since
> > it now implies that any write or execute attempt is trapped as well.
> > However, this does not really matter in practice since we never assume
> > that a read protected page is writable or executable, and StackGuard
> > and HeapGuard (which are the primary users of this facility) certainly
> > don't care.
>
> So ... I'm cautiously positive to this patch.
> But this use does contradict the UEFI spec (2.7a, 2.3.6.1 Memory
> types), which says EFI_MEMORY_RP is "not used or defined" for AArch64.
>
> Charles?
>
Not defined by the spec means we can use it to do whatever we bloody want
with it, at least that is what a typical compiler engineer will tell
you :-)
I think there was a pending ECR to update the AArch64 binding code to
reflect reality, but I don't think we included EFI_MEMORY_RP.
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> > ---
> > ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c | 5 +++--
> > ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 14 +++++++++++---
> > 2 files changed, 14 insertions(+), 5 deletions(-)
> >
> > diff --git a/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c b/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c
> > index 3e216c7cb235..e62e3fa87112 100644
> > --- a/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c
> > +++ b/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c
> > @@ -223,8 +223,9 @@ EfiAttributeToArmAttribute (
> > ArmAttributes = TT_ATTR_INDX_MASK;
> > }
> >
> > - // Set the access flag to match the block attributes
> > - ArmAttributes |= TT_AF;
> > + if ((EfiAttributes & EFI_MEMORY_RP) == 0) {
> > + ArmAttributes |= TT_AF;
> > + }
> >
> > // Determine protection attributes
> > if (EfiAttributes & EFI_MEMORY_RO) {
> > diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > index e1fabfcbea14..b59c081a7e49 100644
> > --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > @@ -102,6 +102,10 @@ PageAttributeToGcdAttribute (
> > GcdAttributes |= EFI_MEMORY_XP;
> > }
> >
> > + if ((PageAttributes & TT_AF) == 0) {
> > + GcdAttributes |= EFI_MEMORY_RP;
> > + }
> > +
> > return GcdAttributes;
> > }
> >
> > @@ -451,7 +455,11 @@ GcdAttributeToPageAttribute (
> > PageAttributes |= TT_AP_RO_RO;
> > }
> >
> > - return PageAttributes | TT_AF;
> > + if ((GcdAttributes & EFI_MEMORY_RP) == 0) {
> > + PageAttributes |= TT_AF;
> > + }
> > +
> > + return PageAttributes;
> > }
> >
> > EFI_STATUS
> > @@ -474,9 +482,9 @@ ArmSetMemoryAttributes (
> > // No memory type was set in Attributes, so we are going to update the
> > // permissions only.
> > //
> > - PageAttributes &= TT_AP_MASK | TT_UXN_MASK | TT_PXN_MASK;
> > + PageAttributes &= TT_AP_MASK | TT_UXN_MASK | TT_PXN_MASK | TT_AF;
> > PageAttributeMask = ~(TT_ADDRESS_MASK_BLOCK_ENTRY | TT_AP_MASK |
> > - TT_PXN_MASK | TT_XN_MASK);
> > + TT_PXN_MASK | TT_XN_MASK | TT_AF);
> > }
> >
> > TranslationTable = ArmGetTTBR0BaseAddress ();
> > --
> > 2.20.1
> >
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 3/5] ArmPkg/ArmMmuLib AARCH64: implement support for EFI_MEMORY_RP permissions
2019-01-14 14:59 ` Ard Biesheuvel
@ 2019-01-14 15:06 ` Leif Lindholm
0 siblings, 0 replies; 19+ messages in thread
From: Leif Lindholm @ 2019-01-14 15:06 UTC (permalink / raw)
To: Ard Biesheuvel; +Cc: Charles Garcia-Tobin, edk2-devel@lists.01.org
On Mon, Jan 14, 2019 at 03:59:08PM +0100, Ard Biesheuvel wrote:
> On Mon, 14 Jan 2019 at 15:29, Leif Lindholm <leif.lindholm@linaro.org> wrote:
> >
> > On Mon, Jan 07, 2019 at 08:15:02AM +0100, Ard Biesheuvel wrote:
> > > Wire up the access flag (AF) page table attribute to the EFI_MEMORY_RP
> > > permission attribute, so that attempts to read from such a region will
> > > trigger an access flag fault.
> > >
> > > Note that this is a stronger notion than just read protection, since
> > > it now implies that any write or execute attempt is trapped as well.
> > > However, this does not really matter in practice since we never assume
> > > that a read protected page is writable or executable, and StackGuard
> > > and HeapGuard (which are the primary users of this facility) certainly
> > > don't care.
> >
> > So ... I'm cautiously positive to this patch.
> > But this use does contradict the UEFI spec (2.7a, 2.3.6.1 Memory
> > types), which says EFI_MEMORY_RP is "not used or defined" for AArch64.
> >
> > Charles?
>
> Not defined by the spec means we can use it to do whatever we bloody want
> with it, at least that is what a typical compiler engineer will tell
> you :-)
Not defined, yes. Not used, less so. For a reciprocal compiler
engineer analogy, think pointer tagging :)
> I think there was a pending ECR to update the AArch64 binding code to
> reflect reality, but I don't think we included EFI_MEMORY_RP.
Right.
Anyway, I'm reasonably happy with stretching the rules slightly here,
and I believe it to be safe, but I do want Charles' take on it.
/
Leif
> > > Contributed-under: TianoCore Contribution Agreement 1.1
> > > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> > > ---
> > > ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c | 5 +++--
> > > ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 14 +++++++++++---
> > > 2 files changed, 14 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c b/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c
> > > index 3e216c7cb235..e62e3fa87112 100644
> > > --- a/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c
> > > +++ b/ArmPkg/Drivers/CpuDxe/AArch64/Mmu.c
> > > @@ -223,8 +223,9 @@ EfiAttributeToArmAttribute (
> > > ArmAttributes = TT_ATTR_INDX_MASK;
> > > }
> > >
> > > - // Set the access flag to match the block attributes
> > > - ArmAttributes |= TT_AF;
> > > + if ((EfiAttributes & EFI_MEMORY_RP) == 0) {
> > > + ArmAttributes |= TT_AF;
> > > + }
> > >
> > > // Determine protection attributes
> > > if (EfiAttributes & EFI_MEMORY_RO) {
> > > diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > > index e1fabfcbea14..b59c081a7e49 100644
> > > --- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > > +++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> > > @@ -102,6 +102,10 @@ PageAttributeToGcdAttribute (
> > > GcdAttributes |= EFI_MEMORY_XP;
> > > }
> > >
> > > + if ((PageAttributes & TT_AF) == 0) {
> > > + GcdAttributes |= EFI_MEMORY_RP;
> > > + }
> > > +
> > > return GcdAttributes;
> > > }
> > >
> > > @@ -451,7 +455,11 @@ GcdAttributeToPageAttribute (
> > > PageAttributes |= TT_AP_RO_RO;
> > > }
> > >
> > > - return PageAttributes | TT_AF;
> > > + if ((GcdAttributes & EFI_MEMORY_RP) == 0) {
> > > + PageAttributes |= TT_AF;
> > > + }
> > > +
> > > + return PageAttributes;
> > > }
> > >
> > > EFI_STATUS
> > > @@ -474,9 +482,9 @@ ArmSetMemoryAttributes (
> > > // No memory type was set in Attributes, so we are going to update the
> > > // permissions only.
> > > //
> > > - PageAttributes &= TT_AP_MASK | TT_UXN_MASK | TT_PXN_MASK;
> > > + PageAttributes &= TT_AP_MASK | TT_UXN_MASK | TT_PXN_MASK | TT_AF;
> > > PageAttributeMask = ~(TT_ADDRESS_MASK_BLOCK_ENTRY | TT_AP_MASK |
> > > - TT_PXN_MASK | TT_XN_MASK);
> > > + TT_PXN_MASK | TT_XN_MASK | TT_AF);
> > > }
> > >
> > > TranslationTable = ArmGetTTBR0BaseAddress ();
> > > --
> > > 2.20.1
> > >
^ permalink raw reply [flat|nested] 19+ messages in thread
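[The ArmSetMemoryAttributes hunk discussed in this sub-thread widens the permission-only update mask to include TT_AF. The mechanism -- keep the bits outside the mask, replace the bits inside it -- can be sketched as follows, with illustrative stand-in constants rather than the real ArmPkg definitions.]

```c
#include <stdint.h>

/* Stand-ins for the real descriptor field definitions. */
#define TT_AP_MASK  (3ULL << 6)    /* AP[2:1], descriptor bits 7:6 */
#define TT_AF       (1ULL << 10)   /* access flag */

/* Sketch of the permission-only update path: bits selected by
 * PreserveMask are kept from the old descriptor (memory type, output
 * address), the rest come from NewAttributes. With TT_AF now outside
 * the preserved set, EFI_MEMORY_RP changes actually take effect. */
static uint64_t
UpdatePermissionBits (uint64_t Entry, uint64_t NewAttributes,
                      uint64_t PreserveMask)
{
  return (Entry & PreserveMask) | NewAttributes;
}
```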
* [PATCH 4/5] ArmPkg/ArmMmuLib AARCH64: add support for read-only page tables
2019-01-07 7:14 [PATCH 0/5] memory/MMU hardening for AArch64 Ard Biesheuvel
` (2 preceding siblings ...)
2019-01-07 7:15 ` [PATCH 3/5] ArmPkg/ArmMmuLib AARCH64: implement support for EFI_MEMORY_RP permissions Ard Biesheuvel
@ 2019-01-07 7:15 ` Ard Biesheuvel
2019-01-07 7:15 ` [PATCH 5/5] ArmPkg/CpuDxe: switch to read-only page tables at EndOfDxe Ard Biesheuvel
4 siblings, 0 replies; 19+ messages in thread
From: Ard Biesheuvel @ 2019-01-07 7:15 UTC (permalink / raw)
To: edk2-devel
As a hardening measure, implement support for remapping all page
tables read-only at a certain point during the boot (end of DXE
is the most appropriate trigger).
This should make it a lot more difficult to take advantage of
write exploits to defeat authentication checks, since the
attacker can no longer manipulate the page tables directly.
To allow the page tables to still be manipulated, make use
of the existing code to manipulate live entries: this drops
into assembler with interrupts off, and disables the MMU for
a brief moment to avoid causing TLB conflicts. Since page
tables are writable with the MMU off, we can reuse this code
to still manipulate the page tables after we updated the CPU
mappings to be read-only.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
ArmPkg/Include/Library/ArmMmuLib.h | 6 +
ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 119 ++++++++++++++++++--
ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c | 8 ++
3 files changed, 123 insertions(+), 10 deletions(-)
diff --git a/ArmPkg/Include/Library/ArmMmuLib.h b/ArmPkg/Include/Library/ArmMmuLib.h
index d2725810f1c6..f0832b91bf17 100644
--- a/ArmPkg/Include/Library/ArmMmuLib.h
+++ b/ArmPkg/Include/Library/ArmMmuLib.h
@@ -70,4 +70,10 @@ ArmSetMemoryAttributes (
IN UINT64 Attributes
);
+VOID
+EFIAPI
+MapAllPageTablesReadOnly (
+ VOID
+ );
+
#endif
diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
index b59c081a7e49..cefaad9961ea 100644
--- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
+++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
@@ -28,6 +28,8 @@
// We use this index definition to define an invalid block entry
#define TT_ATTR_INDX_INVALID ((UINT32)~0)
+STATIC BOOLEAN mReadOnlyPageTables;
+
STATIC
UINT64
ArmMemoryAttributeToPageAttribute (
@@ -137,6 +139,9 @@ ReplaceLiveEntry (
IN UINT64 Address
)
{
+ if (*Entry == Value) {
+ return;
+ }
if (!ArmMmuEnabled ()) {
*Entry = Value;
} else {
@@ -181,7 +186,8 @@ GetBlockEntryListFromAddress (
IN UINT64 RegionStart,
OUT UINTN *TableLevel,
IN OUT UINT64 *BlockEntrySize,
- OUT UINT64 **LastBlockEntry
+ OUT UINT64 **LastBlockEntry,
+ OUT BOOLEAN *NewPageTablesAllocated
)
{
UINTN RootTableLevel;
@@ -292,6 +298,8 @@ GetBlockEntryListFromAddress (
return NULL;
}
+ *NewPageTablesAllocated = TRUE;
+
// Populate the newly created lower level table
SubTableBlockEntry = TranslationTable;
for (Index = 0; Index < TT_ENTRY_COUNT; Index++) {
@@ -316,10 +324,18 @@ GetBlockEntryListFromAddress (
return NULL;
}
+ *NewPageTablesAllocated = TRUE;
+
ZeroMem (TranslationTable, TT_ENTRY_COUNT * sizeof(UINT64));
// Fill the new BlockEntry with the TranslationTable
- *BlockEntry = ((UINTN)TranslationTable & TT_ADDRESS_MASK_DESCRIPTION_TABLE) | TT_TYPE_TABLE_ENTRY;
+ if (!mReadOnlyPageTables) {
+ *BlockEntry = (UINTN)TranslationTable | TT_TYPE_TABLE_ENTRY;
+ } else {
+ ReplaceLiveEntry (BlockEntry,
+ (UINTN)TranslationTable | TT_TYPE_TABLE_ENTRY,
+ RegionStart);
+ }
}
}
}
@@ -345,7 +361,8 @@ UpdateRegionMapping (
IN UINT64 RegionStart,
IN UINT64 RegionLength,
IN UINT64 Attributes,
- IN UINT64 BlockEntryMask
+ IN UINT64 BlockEntryMask,
+ OUT BOOLEAN *ReadOnlyRemapDone
)
{
UINT32 Type;
@@ -353,6 +370,7 @@ UpdateRegionMapping (
UINT64 *LastBlockEntry;
UINT64 BlockEntrySize;
UINTN TableLevel;
+ BOOLEAN NewPageTablesAllocated;
// Ensure the Length is aligned on 4KB boundary
if ((RegionLength == 0) || ((RegionLength & (SIZE_4KB - 1)) != 0)) {
@@ -360,11 +378,13 @@ UpdateRegionMapping (
return EFI_INVALID_PARAMETER;
}
+ NewPageTablesAllocated = FALSE;
do {
// Get the first Block Entry that matches the Virtual Address and also the information on the Table Descriptor
// such as the the size of the Block Entry and the address of the last BlockEntry of the Table Descriptor
BlockEntrySize = RegionLength;
- BlockEntry = GetBlockEntryListFromAddress (RootTable, RegionStart, &TableLevel, &BlockEntrySize, &LastBlockEntry);
+ BlockEntry = GetBlockEntryListFromAddress (RootTable, RegionStart,
+ &TableLevel, &BlockEntrySize, &LastBlockEntry, &NewPageTablesAllocated);
if (BlockEntry == NULL) {
// GetBlockEntryListFromAddress() return NULL when it fails to allocate new pages from the Translation Tables
return EFI_OUT_OF_RESOURCES;
@@ -378,10 +398,16 @@ UpdateRegionMapping (
do {
// Fill the Block Entry with attribute and output block address
- *BlockEntry &= BlockEntryMask;
- *BlockEntry |= (RegionStart & TT_ADDRESS_MASK_BLOCK_ENTRY) | Attributes | Type;
+ if (!mReadOnlyPageTables) {
+ *BlockEntry &= BlockEntryMask;
+ *BlockEntry |= (RegionStart & TT_ADDRESS_MASK_BLOCK_ENTRY) | Attributes | Type;
- ArmUpdateTranslationTableEntry (BlockEntry, (VOID *)RegionStart);
+ ArmUpdateTranslationTableEntry (BlockEntry, (VOID *)RegionStart);
+ } else {
+ ReplaceLiveEntry (BlockEntry,
+ (*BlockEntry & BlockEntryMask) | (RegionStart & TT_ADDRESS_MASK_BLOCK_ENTRY) | Attributes | Type,
+ RegionStart);
+ }
// Go to the next BlockEntry
RegionStart += BlockEntrySize;
@@ -397,9 +423,79 @@ UpdateRegionMapping (
} while ((RegionLength >= BlockEntrySize) && (BlockEntry <= LastBlockEntry));
} while (RegionLength != 0);
+ // if we have switched to read-only page tables, find the newly allocated ones
+ // and update their permissions
+ if (mReadOnlyPageTables && NewPageTablesAllocated) {
+ MapAllPageTablesReadOnly ();
+ if (ReadOnlyRemapDone) {
+ *ReadOnlyRemapDone = TRUE;
+ }
+ }
+
return EFI_SUCCESS;
}
+STATIC
+BOOLEAN
+EFIAPI
+MapPageTableReadOnlyRecursive (
+ IN UINT64 *RootTable,
+ IN UINT64 *TableEntry,
+ IN UINTN NumEntries,
+ IN UINTN TableLevel
+ )
+{
+ EFI_STATUS Status;
+ BOOLEAN Done;
+
+ //
+ // The UpdateRegionMapping () call in this function may recurse into
+ // MapAllPageTablesReadOnly () if it allocates any page tables. When
+ // this happens, there is little point in proceeding here, so let's
+ // bail early in that case.
+ //
+ Done = FALSE;
+ Status = UpdateRegionMapping (RootTable, (UINT64)TableEntry, EFI_PAGE_SIZE,
+ TT_AP_RO_RO, ~TT_ADDRESS_MASK_BLOCK_ENTRY, &Done);
+ ASSERT_EFI_ERROR (Status);
+
+ if (TableLevel == 3) {
+ return Done;
+ }
+
+ // go over the table and recurse for each table type entry
+ while (!Done && NumEntries--) {
+ if ((*TableEntry & TT_TYPE_MASK) == TT_TYPE_TABLE_ENTRY) {
+ Done = MapPageTableReadOnlyRecursive (RootTable,
+ (UINT64 *)(*TableEntry & TT_ADDRESS_MASK_DESCRIPTION_TABLE),
+ TT_ENTRY_COUNT, TableLevel + 1);
+ }
+ TableEntry++;
+ }
+ return Done;
+}
+
+VOID
+EFIAPI
+MapAllPageTablesReadOnly (
+ VOID
+ )
+{
+ UINTN T0SZ;
+ UINTN RootTableEntryCount;
+ UINTN RootLevel;
+ UINT64 *RootTable;
+
+ mReadOnlyPageTables = TRUE;
+
+ T0SZ = ArmGetTCR () & TCR_T0SZ_MASK;
+ GetRootTranslationTableInfo (T0SZ, &RootLevel, &RootTableEntryCount);
+ RootTable = ArmGetTTBR0BaseAddress ();
+
+ MapPageTableReadOnlyRecursive (RootTable, RootTable, RootTableEntryCount,
+ RootLevel);
+}
+
STATIC
EFI_STATUS
FillTranslationTable (
@@ -412,7 +508,8 @@ FillTranslationTable (
MemoryRegion->VirtualBase,
MemoryRegion->Length,
ArmMemoryAttributeToPageAttribute (MemoryRegion->Attributes) | TT_AF,
- 0
+ 0,
+ NULL
);
}
@@ -494,7 +591,8 @@ ArmSetMemoryAttributes (
BaseAddress,
Length,
PageAttributes,
- PageAttributeMask);
+ PageAttributeMask,
+ NULL);
if (EFI_ERROR (Status)) {
return Status;
}
@@ -516,7 +614,8 @@ SetMemoryRegionAttribute (
RootTable = ArmGetTTBR0BaseAddress ();
- Status = UpdateRegionMapping (RootTable, BaseAddress, Length, Attributes, BlockEntryMask);
+ Status = UpdateRegionMapping (RootTable, BaseAddress, Length, Attributes,
+ BlockEntryMask, NULL);
if (EFI_ERROR (Status)) {
return Status;
}
diff --git a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
index bffab83d4fd0..9a75026e2919 100644
--- a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
+++ b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
@@ -844,3 +844,11 @@ ArmMmuBaseLibConstructor (
{
return RETURN_SUCCESS;
}
+
+VOID
+EFIAPI
+MapAllPageTablesReadOnly (
+ VOID
+ )
+{
+}
--
2.20.1
* [PATCH 5/5] ArmPkg/CpuDxe: switch to read-only page tables at EndOfDxe
2019-01-07 7:14 [PATCH 0/5] memory/MMU hardening for AArch64 Ard Biesheuvel
` (3 preceding siblings ...)
2019-01-07 7:15 ` [PATCH 4/5] ArmPkg/ArmMmuLib AARCH64: add support for read-only page tables Ard Biesheuvel
@ 2019-01-07 7:15 ` Ard Biesheuvel
4 siblings, 0 replies; 19+ messages in thread
From: Ard Biesheuvel @ 2019-01-07 7:15 UTC (permalink / raw)
To: edk2-devel
Register for the EndOfDxe event, and use it to invoke the new
ArmMmuLib code that remaps all page tables as read-only. This
should limit the impact of arbitrary write exploits, since such
writes can no longer be used to relax tightened memory permissions
by modifying the page tables themselves.
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
ArmPkg/Drivers/CpuDxe/CpuDxe.c | 23 ++++++++++++++++++++
ArmPkg/Drivers/CpuDxe/CpuDxe.inf | 1 +
2 files changed, 24 insertions(+)
diff --git a/ArmPkg/Drivers/CpuDxe/CpuDxe.c b/ArmPkg/Drivers/CpuDxe/CpuDxe.c
index 5e923d45b715..11f4a2ccf5c8 100644
--- a/ArmPkg/Drivers/CpuDxe/CpuDxe.c
+++ b/ArmPkg/Drivers/CpuDxe/CpuDxe.c
@@ -238,6 +238,17 @@ InitializeDma (
CpuArchProtocol->DmaBufferAlignment = ArmCacheWritebackGranule ();
}
+STATIC
+VOID
+EFIAPI
+OnEndOfDxe (
+ IN EFI_EVENT Event,
+ IN VOID *Context
+ )
+{
+ MapAllPageTablesReadOnly ();
+}
+
EFI_STATUS
CpuDxeInitialize (
IN EFI_HANDLE ImageHandle,
@@ -246,6 +257,7 @@ CpuDxeInitialize (
{
EFI_STATUS Status;
EFI_EVENT IdleLoopEvent;
+ EFI_EVENT EndOfDxeEvent;
InitializeExceptions (&mCpu);
@@ -285,5 +297,16 @@ CpuDxeInitialize (
);
ASSERT_EFI_ERROR (Status);
+
+ Status = gBS->CreateEventEx (
+ EVT_NOTIFY_SIGNAL,
+ TPL_CALLBACK,
+ OnEndOfDxe,
+ NULL,
+ &gEfiEndOfDxeEventGroupGuid,
+ &EndOfDxeEvent
+ );
+ ASSERT_EFI_ERROR (Status);
+
return Status;
}
diff --git a/ArmPkg/Drivers/CpuDxe/CpuDxe.inf b/ArmPkg/Drivers/CpuDxe/CpuDxe.inf
index c32d2cb9c7d4..0788a2ab27c0 100644
--- a/ArmPkg/Drivers/CpuDxe/CpuDxe.inf
+++ b/ArmPkg/Drivers/CpuDxe/CpuDxe.inf
@@ -63,6 +63,7 @@
[Guids]
gEfiDebugImageInfoTableGuid
+ gEfiEndOfDxeEventGroupGuid
gArmMpCoreInfoGuid
gIdleLoopEventGuid
gEfiVectorHandoffTableGuid
--
2.20.1