From: Ard Biesheuvel
To: edk2-devel@lists.01.org, leif.lindholm@linaro.org, lersek@redhat.com
Cc: Ard Biesheuvel
Date: Thu, 2 Mar 2017 10:36:14 +0000
Message-Id: <1488450976-16257-3-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1488450976-16257-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1488450976-16257-1-git-send-email-ard.biesheuvel@linaro.org>
Subject: [PATCH v2 2/4] ArmPkg/CpuDxe ARM: avoid unnecessary cache/TLB maintenance

Page and section entries in the page tables are updated using the helper
ArmUpdateTranslationTableEntry(), which cleans the page table entry to
the PoC, and invalidates the TLB entry covering the page described by
the entry being updated.

Since we may be updating section entries, we might be leaving stale TLB
entries at this point (for all pages in the section except the first
one), which will be invalidated wholesale at the end of
SetMemoryAttributes(). At that point, all caches are cleaned *and*
invalidated as well.

This cache maintenance is costly and unnecessary. The TLB maintenance is
only necessary if we updated any section entries, since any page by page
entries that have been updated will have been invalidated individually
by ArmUpdateTranslationTableEntry(). So drop the clean/invalidate of the
caches, and only perform the full TLB flush if UpdateSectionEntries()
was called, or if sections were split by UpdatePageEntries().
Finally, make the cache maintenance on the remapped regions themselves
conditional on whether any memory type attributes were modified.

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel
---
 ArmPkg/Drivers/CpuDxe/Arm/Mmu.c | 60 +++++++++++---------
 1 file changed, 34 insertions(+), 26 deletions(-)

diff --git a/ArmPkg/Drivers/CpuDxe/Arm/Mmu.c b/ArmPkg/Drivers/CpuDxe/Arm/Mmu.c
index ce4d529bda67..26b637e7658f 100644
--- a/ArmPkg/Drivers/CpuDxe/Arm/Mmu.c
+++ b/ArmPkg/Drivers/CpuDxe/Arm/Mmu.c
@@ -347,10 +347,11 @@ SyncCacheConfig (
 
 EFI_STATUS
 UpdatePageEntries (
-  IN EFI_PHYSICAL_ADDRESS      BaseAddress,
-  IN UINT64                    Length,
-  IN UINT64                    Attributes,
-  IN EFI_PHYSICAL_ADDRESS      VirtualMask
+  IN EFI_PHYSICAL_ADDRESS      BaseAddress,
+  IN UINT64                    Length,
+  IN UINT64                    Attributes,
+  IN EFI_PHYSICAL_ADDRESS      VirtualMask,
+  OUT BOOLEAN                  *FlushTlbs
   )
 {
   EFI_STATUS    Status;
@@ -446,6 +447,9 @@ UpdatePageEntries (
 
       // Re-read descriptor
       Descriptor = FirstLevelTable[FirstLevelIdx];
+      if (FlushTlbs != NULL) {
+        *FlushTlbs = TRUE;
+      }
     }
 
     // Obtain page table base address
@@ -471,15 +475,16 @@ UpdatePageEntries (
 
     if (CurrentPageTableEntry != PageTableEntry) {
       Mva = (VOID *)(UINTN)((((UINTN)FirstLevelIdx) << TT_DESCRIPTOR_SECTION_BASE_SHIFT) + (PageTableIndex << TT_DESCRIPTOR_PAGE_BASE_SHIFT));
-      if ((CurrentPageTableEntry & TT_DESCRIPTOR_PAGE_CACHEABLE_MASK) == TT_DESCRIPTOR_PAGE_CACHEABLE_MASK) {
-        // The current section mapping is cacheable so Clean/Invalidate the MVA of the page
-        // Note assumes switch(Attributes), not ARMv7 possibilities
-        WriteBackInvalidateDataCacheRange (Mva, TT_DESCRIPTOR_PAGE_SIZE);
-      }
 
       // Only need to update if we are changing the entry
       PageTable[PageTableIndex] = PageTableEntry;
       ArmUpdateTranslationTableEntry ((VOID *)&PageTable[PageTableIndex], Mva);
+
+      // Clean/invalidate the cache for this page, but only
+      // if we are modifying the memory type attributes
+      if (((CurrentPageTableEntry ^ PageTableEntry) & TT_DESCRIPTOR_PAGE_CACHE_POLICY_MASK) != 0) {
+        WriteBackInvalidateDataCacheRange (Mva, TT_DESCRIPTOR_PAGE_SIZE);
+      }
     }
 
     Status = EFI_SUCCESS;
@@ -581,7 +586,12 @@ UpdateSectionEntries (
     // has this descriptor already been coverted to pages?
     if (TT_DESCRIPTOR_SECTION_TYPE_IS_PAGE_TABLE(CurrentDescriptor)) {
       // forward this 1MB range to page table function instead
-      Status = UpdatePageEntries ((FirstLevelIdx + i) << TT_DESCRIPTOR_SECTION_BASE_SHIFT, TT_DESCRIPTOR_SECTION_SIZE, Attributes, VirtualMask);
+      Status = UpdatePageEntries (
+                 (FirstLevelIdx + i) << TT_DESCRIPTOR_SECTION_BASE_SHIFT,
+                 TT_DESCRIPTOR_SECTION_SIZE,
+                 Attributes,
+                 VirtualMask,
+                 NULL);
     } else {
       // still a section entry
 
@@ -596,15 +606,16 @@ UpdateSectionEntries (
 
       if (CurrentDescriptor != Descriptor) {
         Mva = (VOID *)(UINTN)(((UINTN)FirstLevelTable) << TT_DESCRIPTOR_SECTION_BASE_SHIFT);
-        if ((CurrentDescriptor & TT_DESCRIPTOR_SECTION_CACHEABLE_MASK) == TT_DESCRIPTOR_SECTION_CACHEABLE_MASK) {
-          // The current section mapping is cacheable so Clean/Invalidate the MVA of the section
-          // Note assumes switch(Attributes), not ARMv7 possabilities
-          WriteBackInvalidateDataCacheRange (Mva, SIZE_1MB);
-        }
 
         // Only need to update if we are changing the descriptor
         FirstLevelTable[FirstLevelIdx + i] = Descriptor;
         ArmUpdateTranslationTableEntry ((VOID *)&FirstLevelTable[FirstLevelIdx + i], Mva);
+
+        // Clean/invalidate the cache for this section, but only
+        // if we are modifying the memory type attributes
+        if (((CurrentDescriptor ^ Descriptor) & TT_DESCRIPTOR_SECTION_CACHE_POLICY_MASK) != 0) {
+          WriteBackInvalidateDataCacheRange (Mva, SIZE_1MB);
+        }
       }
 
       Status = EFI_SUCCESS;
@@ -680,6 +691,7 @@ SetMemoryAttributes (
 {
   EFI_STATUS    Status;
   UINT64        ChunkLength;
+  BOOLEAN       FlushTlbs;
 
   //
   // Ignore invocations that only modify permission bits
@@ -688,6 +700,7 @@ SetMemoryAttributes (
     return EFI_SUCCESS;
   }
 
+  FlushTlbs = FALSE;
   while (Length > 0) {
     if ((BaseAddress % TT_DESCRIPTOR_SECTION_SIZE == 0) &&
         Length >= TT_DESCRIPTOR_SECTION_SIZE) {
@@ -700,6 +713,8 @@ SetMemoryAttributes (
 
       Status = UpdateSectionEntries (BaseAddress, ChunkLength, Attributes,
                  VirtualMask);
+
+      FlushTlbs = TRUE;
     } else {
 
       //
@@ -717,7 +732,7 @@ SetMemoryAttributes (
         BaseAddress, ChunkLength, Attributes));
 
       Status = UpdatePageEntries (BaseAddress, ChunkLength, Attributes,
-                 VirtualMask);
+                 VirtualMask, &FlushTlbs);
     }
 
     if (EFI_ERROR (Status)) {
@@ -728,16 +743,9 @@ SetMemoryAttributes (
     Length -= ChunkLength;
   }
 
-  // Flush d-cache so descriptors make it back to uncached memory for subsequent table walks
-  // flush and invalidate pages
-  //TODO: Do we really need to invalidate the caches everytime we change the memory attributes ?
-  ArmCleanInvalidateDataCache ();
-
-  ArmInvalidateInstructionCache ();
-
-  // Invalidate all TLB entries so changes are synced
-  ArmInvalidateTlb ();
-
+  if (FlushTlbs) {
+    ArmInvalidateTlb ();
+  }
   return Status;
 }
 
-- 
2.7.4
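
For readers following the reasoning rather than the diff itself, the heart of
the change is that cache maintenance is now keyed off a change in the
cacheability bits between the old and the new descriptor instead of being
performed unconditionally, and the full TLB flush is only reached when a
section-level entry was rewritten or split into pages (which is what the new
FlushTlbs flag records). The standalone C sketch below illustrates the
descriptor check outside the edk2 tree; the CACHE_POLICY_MASK value, the bit
patterns, and the NeedsCacheMaintenance() helper are invented for
illustration and are not part of this patch or of ArmPkg.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical cacheability (memory type) field of a short-descriptor
   page table entry; a stand-in for the real ArmPkg mask, not its value. */
#define CACHE_POLICY_MASK  0xCu

/* Clean/invalidate is only needed when the memory type bits change;
   a permissions-only update leaves the cache contents valid. */
static bool
NeedsCacheMaintenance (uint32_t OldEntry, uint32_t NewEntry)
{
  return ((OldEntry ^ NewEntry) & CACHE_POLICY_MASK) != 0;
}

int
main (void)
{
  /* Illustrative bit patterns: same cacheability, different permissions. */
  printf ("permissions-only change: %d\n", NeedsCacheMaintenance (0x45E, 0x41E));
  /* Cacheability bits differ, so the range must be cleaned/invalidated. */
  printf ("memory-type change:      %d\n", NeedsCacheMaintenance (0x45E, 0x452));
  return 0;
}

Built with any C99 compiler, the first call prints 0 (no cache maintenance
needed) and the second prints 1.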