From: "Ard Biesheuvel" <ardb@kernel.org>
Date: Mon, 15 Jan 2024 18:52:46 +0100
Subject: Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
To: Laszlo Ersek
Cc: devel@edk2.groups.io, tbarrett1200@gmail.com, Anatol Belski, Gerd Hoffmann, Jianyong Wu, Jiewen Yao, Rob Bradford

On Mon, 15 Jan 2024 at 18:40, Laszlo Ersek wrote:
>
> On 1/12/24 19:31, Thomas Barrett wrote:
> > This series adds support for cloud-hypervisor guests with >1TiB
> > of RAM to Ovmf. This bug was solved for Qemu back in 2017
> > https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> > still occurring for CloudHv guests because the PlatformScanE820
> > utility is not currently supported for CloudHv.
> > My working branch for these changes can be found here:
> > https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
> >
> > Cc: Anatol Belski
> > Cc: Ard Biesheuvel
> > Cc: Gerd Hoffmann
> > Cc: Jianyong Wu
> > Cc: Jiewen Yao
> > Cc: Laszlo Ersek
> > Cc: Rob Bradford
> >
> > Thomas Barrett (3):
> >   OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
> >   OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
> >   OvmfPkg: CloudHv: Enable PcdUse1GPageTable
> >
> >  OvmfPkg/CloudHv/CloudHvX64.dsc              |   2 +
> >  OvmfPkg/Library/PlatformInitLib/MemDetect.c | 107 ++++++++++++++------
> >  2 files changed, 79 insertions(+), 30 deletions(-)
> >
>
> Thanks for posting v3, this one looks well-formed, so I can make some
> comments.
>
> TBH, I'm super uncomfortable with a new bunch of CLOUDHV_DEVICE_ID
> branches introduced to PlatformInitLib.
>
> OVMF supports multiple hypervisors, and the general idea has been that a
> single module (or even a single platform DSC) should preferably not
> attempt to support multiple hypervisors. That's why we have a separate
> Xen platform, and a big number of Xen-customized, standalone modules.
>
> The idea with this is that any particular developer is very unlikely to
> develop for, and test on, multiple hypervisors. Therefore unification (=
> elimination of all possible code duplication) between distinct
> hypervisor code snippets is actually detrimental for maintenance,
> because it technically enables a developer to regress a platform that
> they never actively work with.
>
> I believe bhyve is similarly separated out (just like Xen).
>
> It's one thing to collect support for multiple board types (machine
> types) for QEMU into a given module; that's still the same hypervisor --
> testing only requires a different "-M" option on the qemu command line.
>
> But fusing Cloud Hypervisor code with QEMU code makes me fidget in my seat.
>
> I've now grepped the OvmfPkg directory tree for existent instances of
> CLOUDHV_DEVICE_ID, and I'm very much not liking the result list. It has
> seeped into a whole bunch of modules that should only be QEMU. If you
> need those modules customized for CloudHv, it'd be best to detach them
> for CloudHv, and eliminate the dead code (such as anything that depends
> on QEMU fw_cfg), and *enforce* that the underlying platform is CloudHv.
> Both communities will benefit. Again, this is all based on the empirical
> fact that QEMU and CloudHv developers don't tend to cross-develop.
>
> I understand that it's always just a small addition; in each specific
> case, it's just one or two more "if"s in common code. But the end result
> is terrible to maintain in the long term.
>
> Of course this all depends on having a separate platform DSC for
> CloudHv, but that one already exists: "CloudHv/CloudHvX64.dsc".
>
> So I'd suggest (a) isolating current CloudHv logic to new library
> instances, drivers, etc, (b) adding this enhancement to CloudHv's own
> instance of PlatformInitLib.
>
> Counter-arguments, objections etc welcome -- feel free to prove me wrong
> (e.g. whatever I'm saying about prior art / status quo).
>
> Also I'm not necessarily blocking this patch set; if others don't mind
> this, they can ACK it (and I won't NACK).
>

Thanks Laszlo. I'm afraid I seem to have made your last point moot :-)
But I agree with your points above: EDK2 makes the fork&tweak approach
very easy, so CloudHv can keep its own versions of many of the QEMU
specific glue libraries.
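
For context on the limitation behind the series: a 40-bit guest-physical
address width covers exactly 1 TiB, the boundary in the thread subject, so
a CloudHv guest with more RAM than that needs the firmware to derive the
address width from the actual end of RAM, which is what scanning the
e820-style map makes possible. Below is a simplified, standalone C sketch
of that calculation; it is not the actual PlatformInitLib/MemDetect.c code,
and the sample end-of-RAM values are invented.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Simplified, standalone sketch -- not the actual OvmfPkg MemDetect.c
 * code. Given the highest guest-physical address present in the memory
 * map (end of the last RAM range), compute how many address bits the
 * firmware must support.
 */
static unsigned int
AddressWidthFromHighestAddress (uint64_t HighestAddress)
{
  unsigned int  Width = 0;

  /* Smallest Width such that 2^Width covers [0, HighestAddress). */
  while ((Width < 64) && (((uint64_t)1 << Width) < HighestAddress)) {
    Width++;
  }

  /* Never go below the traditional 36-bit minimum for small guests. */
  if (Width < 36) {
    Width = 36;
  }

  return Width;
}

int
main (void)
{
  /* Invented end-of-RAM values: a 4 GiB, a 1 TiB, and a 2 TiB guest. */
  uint64_t  Samples[] = {
    0x100000000ULL,      /* 4 GiB */
    0x10000000000ULL,    /* 1 TiB */
    0x20000000000ULL     /* 2 TiB */
  };

  for (size_t Index = 0; Index < sizeof (Samples) / sizeof (Samples[0]); Index++) {
    printf ("end of RAM 0x%llx -> %u address bits\n",
            (unsigned long long)Samples[Index],
            AddressWidthFromHighestAddress (Samples[Index]));
  }

  return 0;
}

Identity-mapping an address space that large with 4 KiB or 2 MiB pages
takes far more page-table entries than 1 GiB pages do, which is presumably
also why the series enables PcdUse1GPageTable for CloudHv.
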
The fork&tweak approach does, however, require a certain degree of hygiene
on the part of the developer: introduce abstractions where possible, to
avoid forking huge drivers or libraries for a 2-line delta between QEMU
and CloudHv.

So let's decree that future CloudHv contributions that follow this old
pattern will not be considered until/unless there is some confidence on
our part that there is a long-term plan in motion that cleans this all up
and repays the accumulated technical debt.
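
As a rough illustration of what such an abstraction seam could look like
(a hypothetical sketch in plain C, not OvmfPkg code, and every name below
is invented): the hypervisor-specific part is confined to a single small
hook, and the shared logic never has to check a host-bridge device ID. A
QEMU library instance might back the hook with fw_cfg, a CloudHv instance
with whatever table CloudHv exposes.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
  uint64_t  Base;
  uint64_t  Size;
} MEMORY_RANGE;

/* The single per-hypervisor hook: fill in the guest's RAM ranges. */
typedef size_t (*GET_MEMORY_MAP) (MEMORY_RANGE *Ranges, size_t MaxRanges);

/* Shared logic, identical for every hypervisor. */
static uint64_t
HighestRamAddress (GET_MEMORY_MAP GetMemoryMap)
{
  MEMORY_RANGE  Ranges[8];
  uint64_t      Highest = 0;
  size_t        Count;

  Count = GetMemoryMap (Ranges, sizeof (Ranges) / sizeof (Ranges[0]));
  for (size_t Index = 0; Index < Count; Index++) {
    uint64_t  End = Ranges[Index].Base + Ranges[Index].Size;
    if (End > Highest) {
      Highest = End;
    }
  }

  return Highest;
}

/* Stand-in for a CloudHv-specific library instance: two invented ranges. */
static size_t
CloudHvGetMemoryMap (MEMORY_RANGE *Ranges, size_t MaxRanges)
{
  if (MaxRanges < 2) {
    return 0;
  }
  Ranges[0] = (MEMORY_RANGE){ 0x0ULL,         0x80000000ULL    };  /* 2 GiB below 4 GiB   */
  Ranges[1] = (MEMORY_RANGE){ 0x100000000ULL, 0x18000000000ULL };  /* 1.5 TiB above 4 GiB */
  return 2;
}

int
main (void)
{
  printf ("highest RAM address: 0x%llx\n",
          (unsigned long long)HighestRamAddress (CloudHvGetMemoryMap));
  return 0;
}

The point is only the shape: when the delta between hypervisors really is
a couple of lines, a hook of this size keeps it out of the shared code
without forking the whole library.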