From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <30871515-6b4f-fd6c-5b56-d911d5f15463@redhat.com>
Date: Mon, 15 Jan 2024 18:39:56 +0100
Subject: Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
To: devel@edk2.groups.io, tbarrett1200@gmail.com
Cc: Anatol Belski, Ard Biesheuvel, Gerd Hoffmann, Jianyong Wu, Jiewen Yao, Rob Bradford
From: "Laszlo Ersek"
Reply-To: devel@edk2.groups.io, lersek@redhat.com

On 1/12/24 19:31, Thomas Barrett wrote:
> This series adds support for cloud-hypervisor guests with >1TiB
> of RAM to Ovmf. This bug was solved for Qemu back in 2017
> https://bugzilla.redhat.com/show_bug.cgi?id=1468526.
> The bug is still occurring for CloudHv guests because the
> PlatformScanE820 utility is not currently supported for CloudHv.
>
> My working branch for these changes can be found here:
> https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
>
> Cc: Anatol Belski
> Cc: Ard Biesheuvel
> Cc: Gerd Hoffmann
> Cc: Jianyong Wu
> Cc: Jiewen Yao
> Cc: Laszlo Ersek
> Cc: Rob Bradford
>
> Thomas Barrett (3):
>   OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
>   OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
>   OvmfPkg: CloudHv: Enable PcdUse1GPageTable
>
>  OvmfPkg/CloudHv/CloudHvX64.dsc              |   2 +
>  OvmfPkg/Library/PlatformInitLib/MemDetect.c | 107 ++++++++++++++------
>  2 files changed, 79 insertions(+), 30 deletions(-)
>

Thanks for posting v3; this one looks well-formed, so I can make some
comments.

TBH, I'm super uncomfortable with a new bunch of CLOUDHV_DEVICE_ID
branches being introduced to PlatformInitLib.

OVMF supports multiple hypervisors, and the general idea has been that
a single module (or even a single platform DSC) should preferably not
attempt to support multiple hypervisors. That's why we have a separate
Xen platform, and a large number of Xen-customized, standalone modules.

The idea behind this is that any particular developer is very unlikely
to develop for, and test on, multiple hypervisors. Therefore
unification (= elimination of all possible code duplication) between
distinct hypervisor code snippets is actually detrimental for
maintenance, because it technically enables a developer to regress a
platform that they never actively work with.

I believe bhyve is similarly separated out (just like Xen).

It's one thing to collect support for multiple board types (machine
types) for QEMU into a given module; that's still the same hypervisor
-- testing only requires a different "-M" option on the qemu command
line. But fusing Cloud Hypervisor code with QEMU code makes me fidget
in my seat.

I've now grepped the OvmfPkg directory tree for existing instances of
CLOUDHV_DEVICE_ID, and I very much don't like the result list. It has
seeped into a whole bunch of modules that should be QEMU-only. If you
need those modules customized for CloudHv, it'd be best to detach them
for CloudHv, eliminate the dead code (such as anything that depends on
QEMU fw_cfg), and *enforce* that the underlying platform is CloudHv.
Both communities will benefit. Again, this is all based on the
empirical fact that QEMU and CloudHv developers don't tend to
cross-develop.

I understand that it's always just a small addition; in each specific
case, it's just one or two more "if"s in common code. But the end
result is terrible to maintain in the long term.

Of course this all depends on having a separate platform DSC for
CloudHv, but that one already exists: "CloudHv/CloudHvX64.dsc". So I'd
suggest (a) isolating the current CloudHv logic into new library
instances, drivers, etc., and (b) adding this enhancement to CloudHv's
own instance of PlatformInitLib.

Counter-arguments, objections, etc. are welcome -- feel free to prove
me wrong (e.g. about what I'm saying regarding prior art / the status
quo).

Also, I'm not necessarily blocking this patch set; if others don't mind
this approach, they can ACK it (and I won't NACK).

BR
Laszlo
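To make suggestion (b) concrete, the split could look roughly like the
following at the DSC level. This is only a sketch: the CloudHv-specific
library instance and its path are hypothetical (they do not exist in the
tree); only the general [LibraryClasses] override mechanism is real.

  # OvmfPkg/CloudHv/CloudHvX64.dsc -- hypothetical CloudHv-only resolution
  [LibraryClasses]
    PlatformInitLib|OvmfPkg/Library/CloudHvPlatformInitLib/CloudHvPlatformInitLib.inf

  # OvmfPkg/OvmfPkgX64.dsc -- keeps pointing at the existing QEMU-oriented
  # instance under OvmfPkg/Library/PlatformInitLib/
  [LibraryClasses]
    PlatformInitLib|OvmfPkg/Library/PlatformInitLib/PlatformInitLib.inf

With a split along these lines, the CloudHv instance could implement its
own E820 scan and address-width initialization without any fw_cfg-dependent
paths, while the QEMU-oriented instance would stay free of
CLOUDHV_DEVICE_ID branches.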