From: "Laszlo Ersek" <lersek@redhat.com>
To: devel@edk2.groups.io, tbarrett1200@gmail.com
Cc: Anatol Belski <anbelski@linux.microsoft.com>,
	Ard Biesheuvel <ardb+tianocore@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Jianyong Wu <jianyong.wu@arm.com>,
	Jiewen Yao <jiewen.yao@intel.com>,
	Rob Bradford <rbradford@rivosinc.com>
Subject: Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
Date: Mon, 15 Jan 2024 18:39:56 +0100
Message-ID: <30871515-6b4f-fd6c-5b56-d911d5f15463@redhat.com>
In-Reply-To: <cover.1705084003.git.tbarrett1200@gmail.com>

On 1/12/24 19:31, Thomas Barrett wrote:
> This series adds support for cloud-hypervisor guests with >1TiB
> of RAM to Ovmf. This bug was solved for Qemu back in 2017
> https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> still occurring for CloudHv guests because the PlatformScanE820
> utility is not currently supported for CloudHv.
> 
> My working branch for these changes can be found here:
> https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
> 
> Cc: Anatol Belski <anbelski@linux.microsoft.com>
> Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Jianyong Wu <jianyong.wu@arm.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Rob Bradford <rbradford@rivosinc.com>
> 
> Thomas Barrett (3):
>   OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
>   OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
>   OvmfPkg: CloudHv: Enable PcdUse1GPageTable
> 
>  OvmfPkg/CloudHv/CloudHvX64.dsc              |   2 +
>  OvmfPkg/Library/PlatformInitLib/MemDetect.c | 107 ++++++++++++++------
>  2 files changed, 79 insertions(+), 30 deletions(-)
> 

Thanks for posting v3; this one looks well-formed, so I can make some
comments.

TBH, I'm super uncomfortable with a bunch of new CLOUDHV_DEVICE_ID
branches being introduced to PlatformInitLib.
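
For illustration, the pattern I mean looks roughly like this -- a sketch
only, the exact field and function names in MemDetect.c may differ:

  STATIC VOID
  PlatformScanE820Sketch (
    IN OUT EFI_HOB_PLATFORM_INFO  *PlatformInfoHob
    )
  {
    if (PlatformInfoHob->HostBridgeDevId == CLOUDHV_DEVICE_ID) {
      //
      // CloudHv-specific E820 retrieval would go here.
      //
    } else {
      //
      // QEMU path: fetch the E820 map via fw_cfg.
      //
    }
  }

Every branch like this in common code is another spot where a change
made for one hypervisor can silently regress the other.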

OVMF supports multiple hypervisors, and the general idea has been that a
single module (or even a single platform DSC) should preferably not
attempt to support multiple hypervisors. That's why we have a separate
Xen platform, and a large number of Xen-customized, standalone modules.

The idea here is that any particular developer is very unlikely to
develop for, and test on, multiple hypervisors. Therefore unification (=
elimination of all possible code duplication) between distinct
hypervisor code snippets is actually detrimental to maintenance,
because it technically enables a developer to regress a platform that
they never actively work with.

I believe bhyve is similarly separated out (just like Xen).

It's one thing to collect support for multiple board types (machine
types) for QEMU into a given module; that's still the same hypervisor --
testing only requires a different "-M" option on the qemu command line.
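
For example, the same OVMF binary can be exercised on both boards with
nothing more than (assuming a combined OVMF.fd image):

  qemu-system-x86_64 -M pc  -drive if=pflash,format=raw,file=OVMF.fd
  qemu-system-x86_64 -M q35 -drive if=pflash,format=raw,file=OVMF.fd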

But fusing Cloud Hypervisor code with QEMU code makes me fidget in my seat.

I've now grepped the OvmfPkg directory tree for existing instances of
CLOUDHV_DEVICE_ID, and I very much dislike the result list. The macro
has seeped into a whole bunch of modules that should be QEMU-only. If
you need those modules customized for CloudHv, it'd be best to detach
them for CloudHv, eliminate the dead code (such as anything that depends
on QEMU fw_cfg), and *enforce* that the underlying platform is CloudHv.
Both communities will benefit. Again, this is all based on the empirical
fact that QEMU and CloudHv developers don't tend to cross-develop.
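
(For reference, the list is easy to reproduce with something like

  git grep -l CLOUDHV_DEVICE_ID -- OvmfPkg

run from the top of the edk2 tree.)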

I understand that it's always just a small addition; in each specific
case, it's just one or two more "if"s in common code. But the end result
is terrible to maintain in the long term.

Of course this all depends on having a separate platform DSC for
CloudHv, but that one already exists: "CloudHv/CloudHvX64.dsc".

So I'd suggest (a) isolating the current CloudHv logic into new library
instances, drivers, etc., and (b) adding this enhancement to CloudHv's
own instance of PlatformInitLib.
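
In DSC terms, "CloudHv/CloudHvX64.dsc" would then resolve the library
class to a CloudHv-only instance, along the lines of (the INF path below
is hypothetical):

  [LibraryClasses]
    PlatformInitLib|OvmfPkg/Library/CloudHvPlatformInitLib/CloudHvPlatformInitLib.inf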

Counter-arguments, objections etc welcome -- feel free to prove me wrong
(e.g. whatever I'm saying about prior art / status quo).

Also, I'm not necessarily blocking this patch set; if others don't mind
it, they can ACK it (and I won't NACK).

BR
Laszlo





