From: "Ard Biesheuvel" <ardb@kernel.org>
To: devel@edk2.groups.io
Cc: Ard Biesheuvel, Michael Kinney, Liming Gao, Jiewen Yao, Michael Kubacki, Sean Brogan, Rebecca Cran, Leif Lindholm, Sami Mujawar, Taylor Beebe, Marvin Häuser
Subject: [PATCH 1/3] ArmPkg/ArmMmuLib: Avoid splitting block entries if possible
Date: Wed, 8 Feb 2023 18:58:10 +0100
Message-Id: <20230208175812.700129-2-ardb@kernel.org>
In-Reply-To: <20230208175812.700129-1-ardb@kernel.org>
References: <20230208175812.700129-1-ardb@kernel.org>

Currently, the AArch64 MMU page table logic will break
down any block entry that overlaps with the region being mapped, even
if the block entry in question uses the same attributes as the new
region. This means that creating a non-executable mapping inside a
region that is already mapped non-executable at a coarser granularity
may trigger a call to AllocatePages (), which may recurse back into
the page table code to update the attributes on the newly allocated
page tables.

Let's avoid this by preserving the block entry if it already covers
the region being mapped with the correct attributes.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
index 1cf8dc090012..28191938aeb1 100644
--- a/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
+++ b/ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
@@ -251,6 +251,15 @@ UpdateRegionMappingRecursive (
     ASSERT (Level < 3);
 
     if (!IsTableEntry (*Entry, Level)) {
+      //
+      // If the region we are trying to map is already covered by a block
+      // entry with the right attributes, don't bother splitting it up.
+      //
+      if (IsBlockEntry (*Entry, Level) &&
+          ((*Entry & TT_ATTRIBUTES_MASK & ~AttributeClearMask) == AttributeSetMask)) {
+        continue;
+      }
+
       //
       // No table entry exists yet, so we need to allocate a page table
       // for the next level.
-- 
2.39.1