On Mon, 15 Jan 2024 at 18:40, Laszlo Ersek <lersek@redhat.com> wrote:
>
> On 1/12/24 19:31, Thomas Barrett wrote:
> > This series adds support for cloud-hypervisor guests with >1TiB
> > of RAM to OVMF. This bug was solved for QEMU back in 2017:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> > still occurring for CloudHv guests because the PlatformScanE820
> > utility is not currently supported for CloudHv.
> >
> > My working branch for these changes can be found here:
> > https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
> >
> > Cc: Anatol Belski <anbelski@linux.microsoft.com>
> > Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > Cc: Jianyong Wu <jianyong.wu@arm.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Cc: Rob Bradford <rbradford@rivosinc.com>
> >
> > Thomas Barrett (3):
> > OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
> > OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
> > OvmfPkg: CloudHv: Enable PcdUse1GPageTable
> >
> > OvmfPkg/CloudHv/CloudHvX64.dsc              |   2 +
> > OvmfPkg/Library/PlatformInitLib/MemDetect.c | 107 ++++++++++++++------
> > 2 files changed, 79 insertions(+), 30 deletions(-)
> >
>
> Thanks for posting v3; this one looks well-formed, so I can make some
> comments.
>
> TBH, I'm super uncomfortable with a new bunch of CLOUDHV_DEVICE_ID
> branches introduced to PlatformInitLib.
>
> OVMF supports multiple hypervisors, and the general idea has been that a
> single module (or even a single platform DSC) should preferably not
> attempt to support multiple hypervisors. That's why we have a separate
> Xen platform, and a big number of Xen-customized, standalone modules.
>
> The idea behind this is that any particular developer is very unlikely
> to develop for, and test on, multiple hypervisors. Therefore unification
> (= elimination of all possible code duplication) between distinct
> hypervisors' code snippets is actually detrimental to maintenance,
> because it technically enables a developer to regress a platform that
> they never actively work with.
>
> I believe bhyve is similarly separated out (just like Xen).
>
> It's one thing to collect support for multiple board types (machine
> types) for QEMU into a given module; that's still the same hypervisor --
> testing only requires a different "-M" option on the qemu command line.
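>
> For instance (command lines heavily abridged), the same OVMF binary can
> be exercised with both machine types:
>
>   qemu-system-x86_64 -M pc  -bios OVMF.fd ...
>   qemu-system-x86_64 -M q35 -bios OVMF.fd ...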
>
> But fusing Cloud Hypervisor code with QEMU code makes me fidget in my seat.
>
> I've now grepped the OvmfPkg directory tree for existing instances of
> CLOUDHV_DEVICE_ID, and I'm very much not liking the result list. It has
> seeped into a whole bunch of modules that should be QEMU-only. If you
> need those modules customized for CloudHv, it'd be best to detach them
> for CloudHv, and eliminate the dead code (such as anything that depends
> on QEMU fw_cfg), and *enforce* that the underlying platform is CloudHv.
> Both communities will benefit. Again, this is all based on the empirical
> fact that QEMU and CloudHv developers don't tend to cross-develop.
>
> I understand that it's always just a small addition; in each specific
> case, it's just one or two more "if"s in common code. But the end result
> is terrible to maintain in the long term.
>
> Of course this all depends on having a separate platform DSC for
> CloudHv, but that one already exists: "CloudHv/CloudHvX64.dsc".
>
> So I'd suggest (a) isolating the current CloudHv logic into new library
> instances, drivers, etc., and (b) adding this enhancement to CloudHv's
> own instance of PlatformInitLib.
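>
> As a rough sketch (the INF path below is hypothetical, purely to
> illustrate the library class resolution I have in mind), CloudHvX64.dsc
> would resolve PlatformInitLib to a CloudHv-only instance:
>
>   [LibraryClasses]
>     # hypothetical CloudHv-specific instance; the QEMU platform DSCs
>     # keep resolving PlatformInitLib to the existing shared instance
>     PlatformInitLib|OvmfPkg/CloudHv/Library/PlatformInitLib/CloudHvPlatformInitLib.inf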
>
> Counter-arguments, objections, etc. are welcome -- feel free to prove me
> wrong (e.g. about whatever I'm saying regarding prior art / the status
> quo).
>
> Also I'm not necessarily blocking this patch set; if others don't mind
> this, they can ACK it (and I won't NACK).
>