From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: edk2-devel@lists.01.org, leif.lindholm@linaro.org, lersek@redhat.com
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Date: Thu, 2 Mar 2017 10:36:13 +0000
Message-Id: <1488450976-16257-2-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1488450976-16257-1-git-send-email-ard.biesheuvel@linaro.org>
Subject: [PATCH v2 1/4] ArmPkg/CpuDxe ARM: avoid splitting page table sections unnecessarily

Currently, any range passed to CpuArchProtocol::SetMemoryAttributes is
fully broken down into page mappings if the start or the size of the
region happens to be misaligned relative to the section size of 1 MB.

This is going to hurt when we enable strict memory permissions, given
that we remap the entire RAM space non-executable (modulo the code
bits) when the CpuArchProtocol is installed.

So refactor the code to iterate over the range in a way that ensures
that all naturally aligned, section-sized subregions are not broken up.
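As an illustration of the intended behaviour (not part of the patch), the
standalone sketch below mirrors the chunking logic of the new loop; the
hypothetical SECTION_SIZE macro stands in for TT_DESCRIPTOR_SECTION_SIZE,
and printf stubs replace UpdateSectionEntries/UpdatePageEntries:

  /*
   * Sketch only: shows how a misaligned range is carved into page-mapped
   * head/tail chunks and section-mapped middle chunks.
   */
  #include <stdint.h>
  #include <stdio.h>

  #define SECTION_SIZE 0x100000ULL  /* 1 MB section size */

  static void SplitRange (uint64_t Base, uint64_t Length)
  {
    uint64_t ChunkLength;

    while (Length > 0) {
      if ((Base % SECTION_SIZE) == 0 && Length >= SECTION_SIZE) {
        /* Naturally aligned, section-sized run: map it with sections. */
        ChunkLength = Length - Length % SECTION_SIZE;
        printf ("sections: 0x%llx + 0x%llx\n",
                (unsigned long long)Base, (unsigned long long)ChunkLength);
      } else {
        /*
         * Map pages up to the next section boundary, or the whole
         * remainder if less than a full section would be left after it.
         */
        ChunkLength = SECTION_SIZE - (Base % SECTION_SIZE);
        if (ChunkLength + SECTION_SIZE > Length) {
          ChunkLength = Length;
        }
        printf ("pages:    0x%llx + 0x%llx\n",
                (unsigned long long)Base, (unsigned long long)ChunkLength);
      }
      Base   += ChunkLength;
      Length -= ChunkLength;
    }
  }

  int main (void)
  {
    /* Range with misaligned start and end: 0x80023000 - 0x80381000 */
    SplitRange (0x80023000ULL, 0x35E000ULL);
    return 0;
  }

For that example range, the sketch prints a page-mapped head up to the
first 1 MB boundary, a 2 MB run of sections, and a page-mapped tail,
which is the behaviour the loop in the patch is meant to guarantee.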
Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 ArmPkg/Drivers/CpuDxe/Arm/Mmu.c | 47 ++++++++++++++++----
 1 file changed, 39 insertions(+), 8 deletions(-)

diff --git a/ArmPkg/Drivers/CpuDxe/Arm/Mmu.c b/ArmPkg/Drivers/CpuDxe/Arm/Mmu.c
index 89e429925ba9..ce4d529bda67 100644
--- a/ArmPkg/Drivers/CpuDxe/Arm/Mmu.c
+++ b/ArmPkg/Drivers/CpuDxe/Arm/Mmu.c
@@ -679,6 +679,7 @@ SetMemoryAttributes (
   )
 {
   EFI_STATUS    Status;
+  UINT64        ChunkLength;
 
   //
   // Ignore invocations that only modify permission bits
@@ -687,14 +688,44 @@ SetMemoryAttributes (
     return EFI_SUCCESS;
   }
 
-  if(((BaseAddress & 0xFFFFF) == 0) && ((Length & 0xFFFFF) == 0)) {
-    // Is the base and length a multiple of 1 MB?
-    DEBUG ((EFI_D_PAGE, "SetMemoryAttributes(): MMU section 0x%x length 0x%x to %lx\n", (UINTN)BaseAddress, (UINTN)Length, Attributes));
-    Status = UpdateSectionEntries (BaseAddress, Length, Attributes, VirtualMask);
-  } else {
-    // Base and/or length is not a multiple of 1 MB
-    DEBUG ((EFI_D_PAGE, "SetMemoryAttributes(): MMU page 0x%x length 0x%x to %lx\n", (UINTN)BaseAddress, (UINTN)Length, Attributes));
-    Status = UpdatePageEntries (BaseAddress, Length, Attributes, VirtualMask);
+  while (Length > 0) {
+    if ((BaseAddress % TT_DESCRIPTOR_SECTION_SIZE == 0) &&
+        Length >= TT_DESCRIPTOR_SECTION_SIZE) {
+
+      ChunkLength = Length - Length % TT_DESCRIPTOR_SECTION_SIZE;
+
+      DEBUG ((DEBUG_PAGE,
+        "SetMemoryAttributes(): MMU section 0x%lx length 0x%lx to %lx\n",
+        BaseAddress, ChunkLength, Attributes));
+
+      Status = UpdateSectionEntries (BaseAddress, ChunkLength, Attributes,
+                 VirtualMask);
+    } else {
+
+      //
+      // Process page by page until the next section boundary, but only if
+      // we have more than a section's worth of area to deal with after that.
+      //
+      ChunkLength = TT_DESCRIPTOR_SECTION_SIZE -
+                    (BaseAddress % TT_DESCRIPTOR_SECTION_SIZE);
+      if (ChunkLength + TT_DESCRIPTOR_SECTION_SIZE > Length) {
+        ChunkLength = Length;
+      }
+
+      DEBUG ((DEBUG_PAGE,
+        "SetMemoryAttributes(): MMU page 0x%lx length 0x%lx to %lx\n",
+        BaseAddress, ChunkLength, Attributes));
+
+      Status = UpdatePageEntries (BaseAddress, ChunkLength, Attributes,
+                 VirtualMask);
+    }
+
+    if (EFI_ERROR (Status)) {
+      break;
+    }
+
+    BaseAddress += ChunkLength;
+    Length -= ChunkLength;
   }
 
   // Flush d-cache so descriptors make it back to uncached memory for subsequent table walks
-- 
2.7.4