public inbox for devel@edk2.groups.io
From: "Anatol Belski" <anbelski@linux.microsoft.com>
To: devel@edk2.groups.io, tbarrett1200@gmail.com,
	Ard Biesheuvel <ardb@kernel.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>,
	Jianyong Wu <jianyong.wu@arm.com>,
	 Jiewen Yao <jiewen.yao@intel.com>,
	Laszlo Ersek <lersek@redhat.com>,
	Rob Bradford <rbradford@rivosinc.com>
Subject: Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
Date: Tue, 16 Jan 2024 07:40:32 +0100	[thread overview]
Message-ID: <0e25a41073f5eeec01e7be6017b75208f416a697.camel@linux.microsoft.com> (raw)
In-Reply-To: <CACjYpRCAFVyRvBH-RDoOufU-c80DWk9kADgYaT=Q5vaYXP=CKg@mail.gmail.com>


Hi,

I was late to the party as well, but I have now done a manual build
and some tests with Cloud Hypervisor, and it looks good.
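
For the record, roughly what I ran -- the edk2 `build` invocation and
the Cloud Hypervisor flags are real, but the toolchain tag, paths and
the memory size are just my setup, adjust as needed:

  build -a X64 -t GCC5 -p OvmfPkg/CloudHv/CloudHvX64.dsc

  cloud-hypervisor \
    --kernel Build/CloudHvX64/DEBUG_GCC5/FV/CLOUDHV.fd \
    --cpus boot=4 \
    --memory size=1152G \
    --disk path=guest.raw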

On Mon, 2024-01-15 at 10:26 -0800, Thomas Barrett wrote:
> Hey all,
> 
> Yeah, I don't disagree, Laszlo. While developing these patches, it
> was pretty confusing to determine which parts of PlatformInitLib were
> compatible with CloudHv and which parts weren't.
> 
> I have no idea how much work would be involved in splitting out
> cloud-hypervisor code. It would be good to hear input from the cloud-
> hypervisor maintainers on this.
> 
Such a refactoring certainly seems worthwhile. We should create an
issue on the Cloud Hypervisor side so this topic isn't lost and so we
can track it there, too.

Thanks!

Anatol


> Best,
> Thomas
> 
> On Mon, Jan 15, 2024 at 9:52 AM Ard Biesheuvel <ardb@kernel.org> wrote:
> > On Mon, 15 Jan 2024 at 18:40, Laszlo Ersek <lersek@redhat.com> wrote:
> > >
> > > On 1/12/24 19:31, Thomas Barrett wrote:
> > > > This series adds support for cloud-hypervisor guests with >1TiB
> > > > of RAM to OVMF. This bug was solved for QEMU back in 2017
> > > > (https://bugzilla.redhat.com/show_bug.cgi?id=1468526), but it is
> > > > still occurring for CloudHv guests because the PlatformScanE820
> > > > utility function does not currently support CloudHv.
> > > >
> > > > My working branch for these changes can be found here:
> > > > https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
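
(For anyone skimming the thread: the fix boils down to deriving the
physical address width from the top of the guest's e820 map instead of
assuming a fixed default. A minimal sketch of the idea in edk2-style C
-- the names are illustrative, not the exact patch code, though
HighBitSet64() is the real BaseLib helper:

  UINT8
  AddressWidthFromTopOfRam (
    IN UINT64  TopOfRam    // exclusive end of the highest RAM range
    )
  {
    UINT8  PhysMemAddressWidth;

    //
    // Smallest width that can address the last RAM byte.
    //
    PhysMemAddressWidth = (UINT8)HighBitSet64 (TopOfRam - 1) + 1;

    //
    // Keep the legacy 36-bit minimum; guests with >1 TiB of RAM come
    // out above 40 bits here, which is the whole point of the series.
    //
    if (PhysMemAddressWidth < 36) {
      PhysMemAddressWidth = 36;
    }

    return PhysMemAddressWidth;
  }

That is the gist; the real code walks all e820 entries.)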
> > > >
> > > > Cc: Anatol Belski <anbelski@linux.microsoft.com>
> > > > Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> > > > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > > > Cc: Jianyong Wu <jianyong.wu@arm.com>
> > > > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > > > Cc: Laszlo Ersek <lersek@redhat.com>
> > > > Cc: Rob Bradford <rbradford@rivosinc.com>
> > > >
> > > > Thomas Barrett (3):
> > > >   OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
> > > >   OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
> > > >   OvmfPkg: CloudHv: Enable PcdUse1GPageTable
> > > >
> > > >  OvmfPkg/CloudHv/CloudHvX64.dsc              |   2 +
> > > >  OvmfPkg/Library/PlatformInitLib/MemDetect.c | 107 ++++++++++++++------
> > > >  2 files changed, 79 insertions(+), 30 deletions(-)
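
(Patch 3/3 is presumably just the platform DSC knob. I'd expect a
one-liner along these lines in CloudHvX64.dsc -- the PCD name is real,
the exact DSC section may differ:

  gEfiMdeModulePkgTokenSpaceGuid.PcdUse1GPageTable|TRUE

Without 1 GiB pages, identity-mapping a multi-TiB address space with
2 MiB pages makes the initial page tables much larger and slower to
build.)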
> > > >
> > >
> > > Thanks for posting v3, this one looks well-formed, so I can make
> > > some comments.
> > >
> > > TBH, I'm super uncomfortable with a new bunch of
> > > CLOUDHV_DEVICE_ID branches introduced to PlatformInitLib.
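
(For readers without the tree at hand, the pattern in question looks
roughly like this -- simplified, not a verbatim quote of the patch,
though CLOUDHV_DEVICE_ID and the HostBridgeDevId field are real:

  if (PlatformInfoHob->HostBridgeDevId == CLOUDHV_DEVICE_ID) {
    //
    // CloudHv: take the memory map from the PVH boot structures.
    //
  } else {
    //
    // QEMU: read the e820 entries via the fw_cfg interface.
    //
  }

Each branch is small on its own; the concern below is what dozens of
them add up to.)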
> > >
> > > OVMF supports multiple hypervisors, and the general idea has been
> > > that a single module (or even a single platform DSC) should
> > > preferably not attempt to support multiple hypervisors. That's why
> > > we have a separate Xen platform, and a big number of
> > > Xen-customized, standalone modules.
> > >
> > > The idea with this is that any particular developer is very
> > > unlikely to develop for, and test on, multiple hypervisors.
> > > Therefore unification (= elimination of all possible code
> > > duplication) between distinct hypervisor code snippets is actually
> > > detrimental for maintenance, because it technically enables a
> > > developer to regress a platform that they never actively work with.
> > >
> > > I believe bhyve is similarly separated out (just like Xen).
> > >
> > > It's one thing to collect support for multiple board types
> > > (machine types) for QEMU into a given module; that's still the
> > > same hypervisor -- testing only requires a different "-M" option
> > > on the qemu command line.
> > >
> > > But fusing Cloud Hypervisor code with QEMU code makes me fidget
> > > in my seat.
> > >
> > > I've now grepped the OvmfPkg directory tree for existent
> > > instances of CLOUDHV_DEVICE_ID, and I'm very much not liking the
> > > result list. It has seeped into a whole bunch of modules that
> > > should only be QEMU. If you need those modules customized for
> > > CloudHv, it'd be best to detach them for CloudHv, and eliminate
> > > the dead code (such as anything that depends on QEMU fw_cfg), and
> > > *enforce* that the underlying platform is CloudHv. Both
> > > communities will benefit. Again, this is all based on the
> > > empirical fact that QEMU and CloudHv developers don't tend to
> > > cross-develop.
> > >
> > > I understand that it's always just a small addition; in each
> > > specific case, it's just one or two more "if"s in common code.
> > > But the end result is terrible to maintain in the long term.
> > >
> > > Of course this all depends on having a separate platform DSC for
> > > CloudHv, but that one already exists: "CloudHv/CloudHvX64.dsc".
> > >
> > > So I'd suggest (a) isolating current CloudHv logic to new library
> > > instances, drivers, etc, (b) adding this enhancement to CloudHv's
> > > own instance of PlatformInitLib.
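
(Concretely, that would mean CloudHvX64.dsc resolving the library
class to its own instance, something like the following -- the INF
path is hypothetical, only the resolution mechanism is standard edk2:

  [LibraryClasses]
    PlatformInitLib|OvmfPkg/Library/PlatformInitLib/CloudHvPlatformInitLib.inf

after which the QEMU instance could drop its CLOUDHV_DEVICE_ID
branches.)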
> > >
> > > Counter-arguments, objections etc welcome -- feel free to prove
> > > me wrong (e.g. whatever I'm saying about prior art / status quo).
> > >
> > > Also I'm not necessarily blocking this patch set; if others don't
> > > mind this, they can ACK it (and I won't NACK).
> > >
> > 
> > Thanks Laszlo.
> > 
> > I'm afraid I seem to have made your last point moot :-)
> > 
> > But I agree with your points above: EDK2 makes the fork&tweak
> > approach very easy, so CloudHv can keep its own versions of many of
> > the QEMU specific glue libraries. It does, however, require a
> > certain degree of hygiene on the part of the developer to introduce
> > abstractions where possible, to avoid forking huge drivers or
> > libraries for a 2 line delta between QEMU and CloudHv.
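
(The usual edk2 shape for such an abstraction: the shared driver or
library calls one small library-class function, and each platform
ships a one-line implementation. A hypothetical illustration, not an
existing API:

  //
  // Declared once in a library-class header; the QEMU and the CloudHv
  // library instances each implement it their own way.
  //
  UINT64
  EFIAPI
  PlatformGetTopOfRam (
    VOID
    );

so a 2-line delta lives in two tiny library instances instead of
inside the shared code.)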
> > 
> > So let's decree that future CloudHv contributions that follow
> > this old pattern will not be considered until/unless there is some
> > confidence on our part that there is a long term plan in motion
> > that cleans this all up and repays the accumulated technical debt.
> 





Thread overview: 13+ messages
2024-01-12 18:31 [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM Thomas Barrett
2024-01-12 18:31 ` [edk2-devel] [PATCH v3 1/3] OvmfPkg: Add CloudHv support to PlatformScanE820 utility function Thomas Barrett
2024-01-12 18:31 ` [edk2-devel] [PATCH v3 2/3] OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv Thomas Barrett
2024-01-12 18:31 ` [edk2-devel] [PATCH v3 3/3] OvmfPkg: CloudHv: Enable PcdUse1GPageTable Thomas Barrett
2024-01-15 15:27 ` [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM Gerd Hoffmann
2024-01-15 15:32   ` Ard Biesheuvel
2024-01-15 16:13     ` Ard Biesheuvel
2024-01-15 19:10       ` Laszlo Ersek
2024-01-15 17:39 ` Laszlo Ersek
2024-01-15 17:52   ` Ard Biesheuvel
2024-01-15 18:26     ` Thomas Barrett
2024-01-15 22:07       ` Anatol Belski
2024-01-16  6:40       ` Anatol Belski [this message]
