From: "Ard Biesheuvel" <ard.biesheuvel@linaro.org>
To: devel@edk2.groups.io
Cc: leif@nuviainc.com, Ard Biesheuvel
Subject: [PATCH v2 4/9] ArmPkg/ArmMmuLib ARM: cache-invalidate initial page table entries
Date: Wed, 4 Mar 2020 19:12:41 +0100
Message-Id: <20200304181246.23513-5-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200304181246.23513-1-ard.biesheuvel@linaro.org>
References: <20200304181246.23513-1-ard.biesheuvel@linaro.org>

In the ARM version of ArmMmuLib, we are currently relying on set/way
invalidation to ensure that the caches are in a consistent state with
respect to main memory once we turn the MMU on. Even if set/way
operations were the appropriate method to achieve this, doing an
invalidate-all first and then populating the page table entries creates
a window where page table entries could be loaded speculatively into
the caches before we modify them, and shadow the new values that we
write there.

So let's get rid of the blanket clean/invalidate operations, and
instead, invalidate each section entry before and after it is updated
(to address all the little corner cases that the ARMv7 spec permits),
and invalidate sets of level 2 entries in blocks, using the generic
invalidation routine from CacheMaintenanceLib.

On ARMv7, cache maintenance may also be required when the MMU is
enabled, in case the page table walker is not cache coherent. However,
the code being updated here is guaranteed to run only while the MMU is
still off, and so we can disregard the case where the MMU and caches
are on.

Since the MMU and D-cache are already off when we reach this point, we
can drop the MMU and D-cache disables as well.
Maintenance of the I-cache is unnecessary, since we are not modifying
any code, and the installed mapping is guaranteed to be 1:1. This means
we can also leave it enabled while the page table population code is
running.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c | 55 +++++++++++++++-----
 1 file changed, 41 insertions(+), 14 deletions(-)

diff --git a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
index aca7a37facac..7c7cad2c3d9d 100644
--- a/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
+++ b/ArmPkg/Library/ArmMmuLib/Arm/ArmMmuLibCore.c
@@ -178,11 +178,25 @@ PopulateLevel2PageTable (

   ASSERT (FirstPageOffset + Pages <= TRANSLATION_TABLE_PAGE_COUNT);

+  //
+  // Invalidate once to prevent page table updates to hit in the
+  // caches inadvertently.
+  //
+  InvalidateDataCacheRange ((UINT32 *)TranslationTable + FirstPageOffset,
+    RemainLength / TT_DESCRIPTOR_PAGE_SIZE * sizeof (*PageEntry));
+
   for (Index = 0; Index < Pages; Index++) {
     *PageEntry++ = TT_DESCRIPTOR_PAGE_BASE_ADDRESS(PhysicalBase) | PageAttributes;
     PhysicalBase += TT_DESCRIPTOR_PAGE_SIZE;
   }
+
+  //
+  // Invalidate again to ensure that any line fetches that may have occurred
+  // [speculatively] since the previous invalidate are evicted again.
+  //
+  ArmDataMemoryBarrier ();
+  InvalidateDataCacheRange ((UINT32 *)TranslationTable + FirstPageOffset,
+    RemainLength / TT_DESCRIPTOR_PAGE_SIZE * sizeof (*PageEntry));
 }

 STATIC
@@ -253,11 +267,28 @@ FillTranslationTable (

   SectionEntry = TRANSLATION_TABLE_ENTRY_FOR_VIRTUAL_ADDRESS(TranslationTable, MemoryRegion->VirtualBase);

   while (RemainLength != 0) {
+    //
+    // Ensure that the assignment of the page table entry will not hit
+    // in the cache. Whether this could occur is IMPLEMENTATION DEFINED
+    // and thus permitted by the ARMv7 architecture.
+    //
+    ArmInvalidateDataCacheEntryByMVA ((UINTN)SectionEntry);
+    ArmDataSynchronizationBarrier ();
+
     if (PhysicalBase % TT_DESCRIPTOR_SECTION_SIZE == 0 &&
         RemainLength >= TT_DESCRIPTOR_SECTION_SIZE) {
       // Case: Physical address aligned on the Section Size (1MB) && the length
       // is greater than the Section Size
-      *SectionEntry++ = TT_DESCRIPTOR_SECTION_BASE_ADDRESS(PhysicalBase) | Attributes;
+      *SectionEntry = TT_DESCRIPTOR_SECTION_BASE_ADDRESS(PhysicalBase) | Attributes;
+
+      //
+      // Issue a DMB to ensure that the page table entry update made it to
+      // memory before we issue the invalidate, otherwise, a subsequent
+      // speculative fetch could observe the old value.
+      //
+      ArmDataMemoryBarrier ();
+      ArmInvalidateDataCacheEntryByMVA ((UINTN)SectionEntry++);
+
       PhysicalBase += TT_DESCRIPTOR_SECTION_SIZE;
       RemainLength -= TT_DESCRIPTOR_SECTION_SIZE;
     } else {
@@ -267,9 +298,17 @@
       // Case: Physical address aligned on the Section Size (1MB) && the length
       // does not fill a section
       // Case: Physical address NOT aligned on the Section Size (1MB)
-      PopulateLevel2PageTable (SectionEntry++, PhysicalBase, PageMapLength,
+      PopulateLevel2PageTable (SectionEntry, PhysicalBase, PageMapLength,
                                MemoryRegion->Attributes);

+      //
+      // Issue a DMB to ensure that the page table entry update made it to
+      // memory before we issue the invalidate, otherwise, a subsequent
+      // speculative fetch could observe the old value.
+      //
+      ArmDataMemoryBarrier ();
+      ArmInvalidateDataCacheEntryByMVA ((UINTN)SectionEntry++);
+
       // If it is the last entry
       if (RemainLength < TT_DESCRIPTOR_SECTION_SIZE) {
         break;
@@ -349,18 +388,6 @@ ArmConfigureMmu (
     }
   }

-  ArmCleanInvalidateDataCache ();
-  ArmInvalidateInstructionCache ();
-
-  ArmDisableDataCache ();
-  ArmDisableInstructionCache();
-  // TLBs are also invalidated when calling ArmDisableMmu()
-  ArmDisableMmu ();
-
-  // Make sure nothing sneaked into the cache
-  ArmCleanInvalidateDataCache ();
-  ArmInvalidateInstructionCache ();
-
   ArmSetTTBR0 ((VOID *)(UINTN)(((UINTN)TranslationTable & ~TRANSLATION_TABLE_SECTION_ALIGNMENT_MASK) | (TTBRAttributes & 0x7F)));

   //
-- 
2.17.1