public inbox for devel@edk2.groups.io
* [PATCH 0/2] ArmPkg/ArmMmuLib ARM: simplify cache invalidation of page tables
@ 2020-03-05 10:00 Ard Biesheuvel
  2020-03-05 10:00 ` [PATCH 1/2] ArmPkg/ArmMmuLib ARM: use AllocateAlignedPages() for alignment Ard Biesheuvel
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Ard Biesheuvel @ 2020-03-05 10:00 UTC (permalink / raw)
  To: devel; +Cc: leif, Ard Biesheuvel

This is a followup to '[PATCH v2 4/9] ArmPkg/ArmMmuLib ARM: cache-invalidate
initial page table entries', and patch #2 of this series should be folded
into that.

Ard Biesheuvel (2):
  ArmPkg/ArmMmuLib ARM: use AllocateAlignedPages() for alignment
  ArmPkg/ArmMmuLib ARM: invalidate page tables as they are allocated

 ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c | 52 +++++++++++---------
 1 file changed, 28 insertions(+), 24 deletions(-)

-- 
2.17.1



* [PATCH 1/2] ArmPkg/ArmMmuLib ARM: use AllocateAlignedPages() for alignment
  2020-03-05 10:00 [PATCH 0/2] ArmPkg/ArmMmuLib ARM: simplify cache invalidation of page tables Ard Biesheuvel
@ 2020-03-05 10:00 ` Ard Biesheuvel
  2020-03-05 10:00 ` [PATCH 2/2] ArmPkg/ArmMmuLib ARM: invalidate page tables as they are allocated Ard Biesheuvel
  2020-03-05 16:48 ` [PATCH 0/2] ArmPkg/ArmMmuLib ARM: simplify cache invalidation of page tables Leif Lindholm
  2 siblings, 0 replies; 5+ messages in thread
From: Ard Biesheuvel @ 2020-03-05 10:00 UTC (permalink / raw)
  To: devel; +Cc: leif, Ard Biesheuvel

Instead of overallocating memory and aligning the resulting base address
manually, use the AllocateAlignedPages () helper, which achieves the
same, and may even manage to do so without leaking a chunk of memory
the same size as the allocation itself.

While at it, fix up a variable declaration in the same hunk, and drop
a comment whose contents add nothing to the following line of code.
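
The difference is easiest to see outside the diff. The sketch below
contrasts the two allocation patterns, using standard C allocators as
stand-ins for the EDK2 AllocatePages()/AllocateAlignedPages() wrappers;
the 1 KB size and alignment are the ARM level-2 table values, but the
helper names are illustrative, not real APIs:

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative values: a level-2 translation table on 32-bit ARM is
   1 KB in size and must be 1 KB aligned. */
#define TABLE_SIZE       1024u
#define TABLE_ALIGNMENT  1024u
#define ALIGNMENT_MASK   (TABLE_ALIGNMENT - 1u)

/* Old pattern: over-allocate by the alignment, then round the base up.
   The slack between the raw base and the rounded base can never be
   freed on its own, so up to TABLE_ALIGNMENT - 1 bytes are leaked per
   table (and the raw pointer needed for freeing is discarded). */
static uintptr_t
AlignByOverallocation (void)
{
  uintptr_t Raw;

  Raw = (uintptr_t)malloc (TABLE_SIZE + TABLE_ALIGNMENT);
  return (Raw + ALIGNMENT_MASK) & ~(uintptr_t)ALIGNMENT_MASK;
}

/* New pattern: ask the allocator for aligned memory directly; there is
   no slack, and the returned pointer is the one to free. */
static uintptr_t
AlignByAllocator (void)
{
  return (uintptr_t)aligned_alloc (TABLE_ALIGNMENT, TABLE_SIZE);
}
```

Both return a TABLE_ALIGNMENT-aligned base; only the first wastes the
rounding slack, which is the leak the paragraph above alludes to.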

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
index 7c7cad2c3d9d..0800ef560d89 100644
--- a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
+++ b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
@@ -138,8 +138,9 @@ PopulateLevel2PageTable (
       // Case where a virtual memory map descriptor overlapped a section entry
 
       // Allocate a Level2 Page Table for this Section
-      TranslationTable = (UINTN)AllocatePages(EFI_SIZE_TO_PAGES(TRANSLATION_TABLE_PAGE_SIZE + TRANSLATION_TABLE_PAGE_ALIGNMENT));
-      TranslationTable = ((UINTN)TranslationTable + TRANSLATION_TABLE_PAGE_ALIGNMENT_MASK) & ~TRANSLATION_TABLE_PAGE_ALIGNMENT_MASK;
+      TranslationTable = (UINTN)AllocateAlignedPages (
+                                  EFI_SIZE_TO_PAGES (TRANSLATION_TABLE_PAGE_SIZE),
+                                  TRANSLATION_TABLE_PAGE_ALIGNMENT);
 
       // Translate the Section Descriptor into Page Descriptor
       SectionDescriptor = TT_DESCRIPTOR_PAGE_TYPE_PAGE | ConvertSectionAttributesToPageAttributes (*SectionEntry, FALSE);
@@ -162,9 +163,9 @@ PopulateLevel2PageTable (
       return;
     }
   } else {
-    TranslationTable = (UINTN)AllocatePages(EFI_SIZE_TO_PAGES(TRANSLATION_TABLE_PAGE_SIZE + TRANSLATION_TABLE_PAGE_ALIGNMENT));
-    TranslationTable = ((UINTN)TranslationTable + TRANSLATION_TABLE_PAGE_ALIGNMENT_MASK) & ~TRANSLATION_TABLE_PAGE_ALIGNMENT_MASK;
-
+    TranslationTable = (UINTN)AllocateAlignedPages (
+                                EFI_SIZE_TO_PAGES (TRANSLATION_TABLE_PAGE_SIZE),
+                                TRANSLATION_TABLE_PAGE_ALIGNMENT);
     ZeroMem ((VOID *)TranslationTable, TRANSLATION_TABLE_PAGE_SIZE);
 
     *SectionEntry = (TranslationTable & TT_DESCRIPTOR_SECTION_PAGETABLE_ADDRESS_MASK) |
@@ -328,16 +329,16 @@ ArmConfigureMmu (
   OUT UINTN                         *TranslationTableSize OPTIONAL
   )
 {
-  VOID*                         TranslationTable;
+  VOID                          *TranslationTable;
   ARM_MEMORY_REGION_ATTRIBUTES  TranslationTableAttribute;
   UINT32                        TTBRAttributes;
 
-  // Allocate pages for translation table.
-  TranslationTable = AllocatePages (EFI_SIZE_TO_PAGES (TRANSLATION_TABLE_SECTION_SIZE + TRANSLATION_TABLE_SECTION_ALIGNMENT));
+  TranslationTable = AllocateAlignedPages (
+                       EFI_SIZE_TO_PAGES (TRANSLATION_TABLE_SECTION_SIZE),
+                       TRANSLATION_TABLE_SECTION_ALIGNMENT);
   if (TranslationTable == NULL) {
     return RETURN_OUT_OF_RESOURCES;
   }
-  TranslationTable = (VOID*)(((UINTN)TranslationTable + TRANSLATION_TABLE_SECTION_ALIGNMENT_MASK) & ~TRANSLATION_TABLE_SECTION_ALIGNMENT_MASK);
 
   if (TranslationTableBase != NULL) {
     *TranslationTableBase = TranslationTable;
-- 
2.17.1



* [PATCH 2/2] ArmPkg/ArmMmuLib ARM: invalidate page tables as they are allocated
  2020-03-05 10:00 [PATCH 0/2] ArmPkg/ArmMmuLib ARM: simplify cache invalidation of page tables Ard Biesheuvel
  2020-03-05 10:00 ` [PATCH 1/2] ArmPkg/ArmMmuLib ARM: use AllocateAlignedPages() for alignment Ard Biesheuvel
@ 2020-03-05 10:00 ` Ard Biesheuvel
  2020-03-05 10:26   ` Ard Biesheuvel
  2020-03-05 16:48 ` [PATCH 0/2] ArmPkg/ArmMmuLib ARM: simplify cache invalidation of page tables Leif Lindholm
  2 siblings, 1 reply; 5+ messages in thread
From: Ard Biesheuvel @ 2020-03-05 10:00 UTC (permalink / raw)
  To: devel; +Cc: leif, Ard Biesheuvel

Instead of performing two cache invalidations for each section entry
that gets updated, perform the first invalidation, which is intended
to clean the page tables from caches on systems where cache hits are
permitted with the MMU and caches off.
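
To make the reshuffle concrete: the patch moves the invalidation from
once-per-update to once-per-allocation. The model below counts calls to
show the effect; InvalidateRange is a hypothetical counting stand-in
for CacheMaintenanceLib's InvalidateDataCacheRange(), not the real
routine, and the entry count is the 256 entries of an ARM level-2 table:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the cache maintenance call: it only counts
   invocations so the two strategies can be compared. */
static unsigned  mInvalidateCalls = 0;

static void
InvalidateRange (void *Base, size_t Length)
{
  (void)Base;
  (void)Length;
  mInvalidateCalls++;
}

#define ENTRY_COUNT  256u   /* entries in one ARM level-2 table */

/* Old approach: invalidate ahead of every entry that is written, each
   time the table is visited. */
static void
PopulateOld (uint32_t *Table)
{
  unsigned  Index;

  for (Index = 0; Index < ENTRY_COUNT; Index++) {
    InvalidateRange (&Table[Index], sizeof (Table[Index]));
    Table[Index] = 0;
  }
}

/* New approach: invalidate the whole table once, right after it is
   allocated, then write entries without further maintenance. */
static void
PopulateNew (uint32_t *Table)
{
  unsigned  Index;

  InvalidateRange (Table, ENTRY_COUNT * sizeof (*Table));
  for (Index = 0; Index < ENTRY_COUNT; Index++) {
    Table[Index] = 0;
  }
}
```

With 256 entries the old strategy issues 256 invalidations per pass and
the new one exactly one per allocated table, which is the whole point of
the patch.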

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c | 33 +++++++++++---------
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
index 0800ef560d89..11a1e704beab 100644
--- a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
+++ b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
@@ -147,6 +147,13 @@ PopulateLevel2PageTable (
 
       BaseSectionAddress = TT_DESCRIPTOR_SECTION_BASE_ADDRESS(*SectionEntry);
 
+      //
+      // Make sure we are not inadvertently hitting in the caches
+      // when populating the page tables
+      //
+      InvalidateDataCacheRange ((VOID *)TranslationTable,
+        TRANSLATION_TABLE_PAGE_SIZE);
+
       // Populate the new Level2 Page Table for the section
       PageEntry = (UINT32*)TranslationTable;
       for (Index = 0; Index < TRANSLATION_TABLE_PAGE_COUNT; Index++) {
@@ -166,6 +173,12 @@ PopulateLevel2PageTable (
     TranslationTable = (UINTN)AllocateAlignedPages (
                                 EFI_SIZE_TO_PAGES (TRANSLATION_TABLE_PAGE_SIZE),
                                 TRANSLATION_TABLE_PAGE_ALIGNMENT);
+    //
+    // Make sure we are not inadvertently hitting in the caches
+    // when populating the page tables
+    //
+    InvalidateDataCacheRange ((VOID *)TranslationTable,
+      TRANSLATION_TABLE_PAGE_SIZE);
     ZeroMem ((VOID *)TranslationTable, TRANSLATION_TABLE_PAGE_SIZE);
 
     *SectionEntry = (TranslationTable & TT_DESCRIPTOR_SECTION_PAGETABLE_ADDRESS_MASK) |
@@ -179,13 +192,6 @@ PopulateLevel2PageTable (
 
   ASSERT (FirstPageOffset + Pages <= TRANSLATION_TABLE_PAGE_COUNT);
 
-  //
-  // Invalidate once to prevent page table updates to hit in the
-  // caches inadvertently.
-  //
-  InvalidateDataCacheRange ((UINT32 *)TranslationTable + FirstPageOffset,
-    RemainLength / TT_DESCRIPTOR_PAGE_SIZE * sizeof (*PageEntry));
-
   for (Index = 0; Index < Pages; Index++) {
     *PageEntry++     =  TT_DESCRIPTOR_PAGE_BASE_ADDRESS(PhysicalBase) | PageAttributes;
     PhysicalBase += TT_DESCRIPTOR_PAGE_SIZE;
@@ -268,14 +274,6 @@ FillTranslationTable (
   SectionEntry    = TRANSLATION_TABLE_ENTRY_FOR_VIRTUAL_ADDRESS(TranslationTable, MemoryRegion->VirtualBase);
 
   while (RemainLength != 0) {
-    //
-    // Ensure that the assignment of the page table entry will not hit
-    // in the cache. Whether this could occur is IMPLEMENTATION DEFINED
-    // and thus permitted by the ARMv7 architecture.
-    //
-    ArmInvalidateDataCacheEntryByMVA ((UINTN)SectionEntry);
-    ArmDataSynchronizationBarrier ();
-
     if (PhysicalBase % TT_DESCRIPTOR_SECTION_SIZE == 0 &&
         RemainLength >= TT_DESCRIPTOR_SECTION_SIZE) {
       // Case: Physical address aligned on the Section Size (1MB) && the length
@@ -348,6 +346,11 @@ ArmConfigureMmu (
     *TranslationTableSize = TRANSLATION_TABLE_SECTION_SIZE;
   }
 
+  //
+  // Make sure we are not inadvertently hitting in the caches
+  // when populating the page tables
+  //
+  InvalidateDataCacheRange (TranslationTable, TRANSLATION_TABLE_SECTION_SIZE);
   ZeroMem (TranslationTable, TRANSLATION_TABLE_SECTION_SIZE);
 
   // By default, mark the translation table as belonging to a uncached region
-- 
2.17.1



* Re: [PATCH 2/2] ArmPkg/ArmMmuLib ARM: invalidate page tables as they are allocated
  2020-03-05 10:00 ` [PATCH 2/2] ArmPkg/ArmMmuLib ARM: invalidate page tables as they are allocated Ard Biesheuvel
@ 2020-03-05 10:26   ` Ard Biesheuvel
  0 siblings, 0 replies; 5+ messages in thread
From: Ard Biesheuvel @ 2020-03-05 10:26 UTC (permalink / raw)
  To: edk2-devel-groups-io; +Cc: Leif Lindholm

On Thu, 5 Mar 2020 at 11:00, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>
> Instead of performing two cache invalidations for each section entry
> that gets updated, perform the first invalidation, which is intended
> to clean the page tables from caches on systems where cache hits are
> permitted with the MMU and caches off.
>

... as the page tables are allocated.

> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c | 33 +++++++++++---------
>  1 file changed, 18 insertions(+), 15 deletions(-)
>
> diff --git a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
> index 0800ef560d89..11a1e704beab 100644
> --- a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
> +++ b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
> @@ -147,6 +147,13 @@ PopulateLevel2PageTable (
>
>        BaseSectionAddress = TT_DESCRIPTOR_SECTION_BASE_ADDRESS(*SectionEntry);
>
> +      //
> +      // Make sure we are not inadvertently hitting in the caches
> +      // when populating the page tables
> +      //
> +      InvalidateDataCacheRange ((VOID *)TranslationTable,
> +        TRANSLATION_TABLE_PAGE_SIZE);
> +
>        // Populate the new Level2 Page Table for the section
>        PageEntry = (UINT32*)TranslationTable;
>        for (Index = 0; Index < TRANSLATION_TABLE_PAGE_COUNT; Index++) {
> @@ -166,6 +173,12 @@ PopulateLevel2PageTable (
>      TranslationTable = (UINTN)AllocateAlignedPages (
>                                  EFI_SIZE_TO_PAGES (TRANSLATION_TABLE_PAGE_SIZE),
>                                  TRANSLATION_TABLE_PAGE_ALIGNMENT);
> +    //
> +    // Make sure we are not inadvertently hitting in the caches
> +    // when populating the page tables
> +    //
> +    InvalidateDataCacheRange ((VOID *)TranslationTable,
> +      TRANSLATION_TABLE_PAGE_SIZE);
>      ZeroMem ((VOID *)TranslationTable, TRANSLATION_TABLE_PAGE_SIZE);
>
>      *SectionEntry = (TranslationTable & TT_DESCRIPTOR_SECTION_PAGETABLE_ADDRESS_MASK) |
> @@ -179,13 +192,6 @@ PopulateLevel2PageTable (
>
>    ASSERT (FirstPageOffset + Pages <= TRANSLATION_TABLE_PAGE_COUNT);
>
> -  //
> -  // Invalidate once to prevent page table updates to hit in the
> -  // caches inadvertently.
> -  //
> -  InvalidateDataCacheRange ((UINT32 *)TranslationTable + FirstPageOffset,
> -    RemainLength / TT_DESCRIPTOR_PAGE_SIZE * sizeof (*PageEntry));
> -
>    for (Index = 0; Index < Pages; Index++) {
>      *PageEntry++     =  TT_DESCRIPTOR_PAGE_BASE_ADDRESS(PhysicalBase) | PageAttributes;
>      PhysicalBase += TT_DESCRIPTOR_PAGE_SIZE;
> @@ -268,14 +274,6 @@ FillTranslationTable (
>    SectionEntry    = TRANSLATION_TABLE_ENTRY_FOR_VIRTUAL_ADDRESS(TranslationTable, MemoryRegion->VirtualBase);
>
>    while (RemainLength != 0) {
> -    //
> -    // Ensure that the assignment of the page table entry will not hit
> -    // in the cache. Whether this could occur is IMPLEMENTATION DEFINED
> -    // and thus permitted by the ARMv7 architecture.
> -    //
> -    ArmInvalidateDataCacheEntryByMVA ((UINTN)SectionEntry);
> -    ArmDataSynchronizationBarrier ();
> -
>      if (PhysicalBase % TT_DESCRIPTOR_SECTION_SIZE == 0 &&
>          RemainLength >= TT_DESCRIPTOR_SECTION_SIZE) {
>        // Case: Physical address aligned on the Section Size (1MB) && the length
> @@ -348,6 +346,11 @@ ArmConfigureMmu (
>      *TranslationTableSize = TRANSLATION_TABLE_SECTION_SIZE;
>    }
>
> +  //
> +  // Make sure we are not inadvertently hitting in the caches
> +  // when populating the page tables
> +  //
> +  InvalidateDataCacheRange (TranslationTable, TRANSLATION_TABLE_SECTION_SIZE);
>    ZeroMem (TranslationTable, TRANSLATION_TABLE_SECTION_SIZE);
>
>    // By default, mark the translation table as belonging to a uncached region
> --
> 2.17.1
>


* Re: [PATCH 0/2] ArmPkg/ArmMmuLib ARM: simplify cache invalidation of page tables
  2020-03-05 10:00 [PATCH 0/2] ArmPkg/ArmMmuLib ARM: simplify cache invalidation of page tables Ard Biesheuvel
  2020-03-05 10:00 ` [PATCH 1/2] ArmPkg/ArmMmuLib ARM: use AllocateAlignedPages() for alignment Ard Biesheuvel
  2020-03-05 10:00 ` [PATCH 2/2] ArmPkg/ArmMmuLib ARM: invalidate page tables as they are allocated Ard Biesheuvel
@ 2020-03-05 16:48 ` Leif Lindholm
  2 siblings, 0 replies; 5+ messages in thread
From: Leif Lindholm @ 2020-03-05 16:48 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: devel

On Thu, Mar 05, 2020 at 11:00:28 +0100, Ard Biesheuvel wrote:
> This is a followup to '[PATCH v2 4/9] ArmPkg/ArmMmuLib ARM: cache-invalidate
> initial page table entries', and patch #2 of this series should be folded
> into that.

With the commit message fixup for 2/2 that you have already identified
yourself, for the series:
Reviewed-by: Leif Lindholm <leif@nuviainc.com>


> Ard Biesheuvel (2):
>   ArmPkg/ArmMmuLib ARM: use AllocateAlignedPages() for alignment
>   ArmPkg/ArmMmuLib ARM: invalidate page tables as they are allocated
> 
>  ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c | 52 +++++++++++---------
>  1 file changed, 28 insertions(+), 24 deletions(-)
> 
> -- 
> 2.17.1
> 

