From: "Laszlo Ersek" <lersek@redhat.com>
Date: Thu, 1 Feb 2024 22:04:28 +0100
Subject: Re: [edk2-devel] [PATCH 2/3] OvmfPkg/PlatformPei: rewrite page table calculation
To: Gerd Hoffmann
Cc: devel@edk2.groups.io, Oliver Steffen, Jiewen Yao, Ard Biesheuvel
Message-ID: <0774d00e-dfc8-0325-4b7d-4f46e86431b7@redhat.com>
References: <20240131120000.358090-1-kraxel@redhat.com> <20240131120000.358090-3-kraxel@redhat.com>

On 1/31/24 17:28, Gerd Hoffmann wrote:
> On Wed, Jan 31, 2024 at 04:13:24PM +0100, Laszlo Ersek wrote:
>> On 1/31/24 12:59, Gerd Hoffmann wrote:
>>> Consider 5-level paging. Simplify calculation to make it easier to
>>> understand.
>>> The new calculation is not 100% exact, but we only need
>>> a rough estimate to reserve enough memory.
>>>
>>> Signed-off-by: Gerd Hoffmann
>>> ---
>>>  OvmfPkg/PlatformPei/MemDetect.c | 42 ++++++++++++++++++---------------
>>>  1 file changed, 23 insertions(+), 19 deletions(-)
>>
>> The cover letter should have explained that this series depends on the
>> 5-level paging series -- or else, this one should be appended to that
>> series.
>>
>> With no on-list connection between them, it's a mess for me to keep
>> track of which one to merge first.
>
> There is no hard dependency between the two, it doesn't matter much
> which is merged first. The connection between the two series is that
> for guests with a lot of memory you'll need both. A lot means hundreds
> of TB, exceeding the address space which 4-level paging can handle,
> so the TotalPages calculation done by the old code is *significantly*
> off (more than just the one extra page for the 5th level).
>
>> (1) The wording is difficult to follow though, for me anyway. How about
>> this:
>>
>> -------
>> - A 4KB page accommodates the least significant 12 bits of the virtual
>> address.
>> - A page table entry at any level consumes 8 bytes, so a 4KB page table
>> page (at any level) contains 512 entries, and accommodates 9 bits of the
>> virtual address.
>> - We minimally cover the phys address space with 2MB pages, so level 1
>> never exists.
>> - If 1G paging is available, then level 2 doesn't exist either.
>> - Start with level 2, where a page table page accommodates 9 + 9 + 12 =
>> 30 bits of the virtual address (and covers 1GB of physical address space).
>> -------
>>
>> If you think this isn't any easier to follow, then feel free to stick
>> with your description.
>
> I happily go with your more verbose version.
>
>> (3) I'm sorry, these +1 additions *really* annoy me, not to mention the
>> fact that we *include* those increments in the further shifting. Can we
>> do:
>>
>>   UINT64 End;
>>   UINT64 Level2Pages, Level3Pages, Level4Pages, Level5Pages;
>>
>>   End         = 1LLU << PlatformInfoHob->PhysMemAddressWidth;
>>   Level2Pages = Page1GSupport ? 0LLU : End >> 30;
>>   Level3Pages = MAX (End >> 39, 1LLU);
>>   Level4Pages = MAX (End >> 48, 1LLU);
>>   Level5Pages = 1;
>>
>> This doesn't seem any more complicated, and it's exact, I believe.
>
> Looks good, I'll take it, thanks a lot.
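For reference, the calculation proposed above drops into a small standalone
test program; the sketch below is not edk2 code (MAX is redefined locally,
stdint types stand in for UINT64, and PhysMemAddressWidth / Page1GSupport are
plain parameters rather than PlatformInfoHob fields), but it computes the same
per-level counts:

  /* Standalone sketch of the per-level page table page counts; not edk2 code. */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define MAX(a, b)  (((a) > (b)) ? (a) : (b))   /* local stand-in for the edk2 macro */

  static uint64_t
  TotalPageTablePages (
    unsigned  PhysMemAddressWidth,
    bool      Page1GSupport
    )
  {
    uint64_t  End, Level2Pages, Level3Pages, Level4Pages, Level5Pages;

    End         = 1ULL << PhysMemAddressWidth;
    Level2Pages = Page1GSupport ? 0ULL : End >> 30;  /* one table per 1GB, unless 1G leaf pages */
    Level3Pages = MAX (End >> 39, 1ULL);             /* at least one table at each remaining level */
    Level4Pages = MAX (End >> 48, 1ULL);
    Level5Pages = 1;                                 /* would be 0 if 5-level paging is unavailable */

    return Level2Pages + Level3Pages + Level4Pages + Level5Pages;
  }

  int
  main (void)
  {
    printf ("40-bit, 2M pages: 0x%llx\n", (unsigned long long)TotalPageTablePages (40, false));
    printf ("57-bit, 1G pages: 0x%llx\n", (unsigned long long)TotalPageTablePages (57, true));
    printf ("57-bit, 2M pages: 0x%llx\n", (unsigned long long)TotalPageTablePages (57, false));
    return 0;
  }

For the address widths discussed in this thread it prints 0x404 (40-bit, no
1G pages), 0x40201 (57-bit, 1G pages) and 0x8040201 (57-bit, no 1G pages),
matching the ASSERT() bounds worked out below.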
>>>   ASSERT (TotalPages <= 0x40201);
>>
>> (4) The ASSERT() is no longer correct, for two reasons: (a) it does not
>> consider 5-level paging, (b) the calculation from the patch is not exact
>> anyway.
>>
>> But, I think the method I'm proposing should be exact (discounting that
>> with 5-level paging unavailable, Level5Pages should be set to zero).
>>
>> Assuming PhysMemAddressWidth is 57, and 1GB pages are not supported, we
>> get:
>>
>>   Level2Pages = BIT27;
>>   Level3Pages = BIT18;
>>   Level4Pages = BIT9;
>>   Level5Pages = BIT0;
>>
>> therefore
>>
>>   ASSERT (TotalPages <= 0x8040201);
>
> Ah, *this* is how this constant was calculated.
>
>> in other words, we only need to add BIT27 to the existing constant
>> 0x40201, in the ASSERT().
>
> With 1GB pages Level2Pages will be zero, so 0x40201 is correct in that
> case.

Right! :)

>
> Without 1GB pages OVMF will use at most PhysMemAddressWidth = 40 (1TB),
> so:
>
>   Level2Pages = BIT10;
>   Level3Pages = BIT1;
>   Level4Pages = 1;
>   Level5Pages = 1;
>   -> SUM = 0x404;
>
> Which is smaller than 0x40201. So the ASSERT happens to be correct.
> Which makes sense. The max page table tree is identical for 4-level
> paging with 2M pages and 5-level paging with 1G pages.

I was amazed to find that myself; but exactly as you explain, in
retrospect, it's "obvious". :) It just feels nice that we keep the
original "large page" tree, just shift the leaf granularity by 9 bits!

>
> I'll add a comment explaining this.

Thanks!
Laszlo
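As a sanity check of that observation, the sums can be written out as
compile-time assertions; a minimal standalone C11 fragment (BIT* expanded to
shifts by hand, assuming 48-bit and 57-bit address widths for the 4-level and
5-level cases; not edk2 code):

  /*
   * The maximal page table tree for 4-level paging with 2M leaf pages (48-bit
   * VA) has the same number of pages as the one for 5-level paging with 1G
   * leaf pages (57-bit VA) -- each level's count just moves down one slot.
   */

  /* 4-level, 2M pages, 48-bit: level 2 + level 3 + level 4. */
  _Static_assert ((1ULL << 18) + (1ULL << 9) + 1 == 0x40201, "4-level / 2M");

  /* 5-level, 1G pages, 57-bit: level 3 + level 4 + level 5. */
  _Static_assert ((1ULL << 18) + (1ULL << 9) + 1 == 0x40201, "5-level / 1G");

  /* 5-level, 2M pages, 57-bit: one more level below, hence the extra BIT27. */
  _Static_assert ((1ULL << 27) + (1ULL << 18) + (1ULL << 9) + 1 == 0x8040201,
                  "5-level / 2M");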