* [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
@ 2024-01-12 18:31 Thomas Barrett
2024-01-12 18:31 ` [edk2-devel] [PATCH v3 1/3] OvmfPkg: Add CloudHv support to PlatformScanE820 utility function Thomas Barrett
` (4 more replies)
0 siblings, 5 replies; 13+ messages in thread
From: Thomas Barrett @ 2024-01-12 18:31 UTC (permalink / raw)
To: devel
Cc: Thomas Barrett, Anatol Belski, Ard Biesheuvel, Gerd Hoffmann,
Jianyong Wu, Jiewen Yao, Laszlo Ersek, Rob Bradford
This series adds support for cloud-hypervisor guests with >1TiB
of RAM to OVMF. This bug was solved for QEMU back in 2017
(https://bugzilla.redhat.com/show_bug.cgi?id=1468526), but it is
still occurring for CloudHv guests because the PlatformScanE820
utility does not currently support CloudHv.
My working branch for these changes can be found here:
https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
Cc: Anatol Belski <anbelski@linux.microsoft.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jianyong Wu <jianyong.wu@arm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rob Bradford <rbradford@rivosinc.com>
Thomas Barrett (3):
OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
OvmfPkg: CloudHv: Enable PcdUse1GPageTable
OvmfPkg/CloudHv/CloudHvX64.dsc | 2 +
OvmfPkg/Library/PlatformInitLib/MemDetect.c | 107 ++++++++++++++------
2 files changed, 79 insertions(+), 30 deletions(-)
--
2.34.1
-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#113771): https://edk2.groups.io/g/devel/message/113771
Mute This Topic: https://groups.io/mt/103689730/7686176
Group Owner: devel+owner@edk2.groups.io
Unsubscribe: https://edk2.groups.io/g/devel/unsub [rebecca@openfw.io]
-=-=-=-=-=-=-=-=-=-=-=-
^ permalink raw reply [flat|nested] 13+ messages in thread
* [edk2-devel] [PATCH v3 1/3] OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
2024-01-12 18:31 [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM Thomas Barrett
@ 2024-01-12 18:31 ` Thomas Barrett
2024-01-12 18:31 ` [edk2-devel] [PATCH v3 2/3] OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv Thomas Barrett
` (3 subsequent siblings)
4 siblings, 0 replies; 13+ messages in thread
From: Thomas Barrett @ 2024-01-12 18:31 UTC (permalink / raw)
To: devel
Cc: Thomas Barrett, Anatol Belski, Ard Biesheuvel, Gerd Hoffmann,
Jianyong Wu, Jiewen Yao, Laszlo Ersek, Rob Bradford
From: Thomas Barrett <tbarrett@crusoeenergy.com>
The PlatformScanE820 utility function is not currently compatible
with CloudHv, since it relies on the presence of the "etc/e820"
QemuFwCfg file. Update PlatformScanE820 to iterate through the
PVH e820 entries when running on a CloudHv guest.
Cc: Anatol Belski <anbelski@linux.microsoft.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jianyong Wu <jianyong.wu@arm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rob Bradford <rbradford@rivosinc.com>
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
---
OvmfPkg/Library/PlatformInitLib/MemDetect.c | 95 ++++++++++++++-------
1 file changed, 65 insertions(+), 30 deletions(-)
diff --git a/OvmfPkg/Library/PlatformInitLib/MemDetect.c b/OvmfPkg/Library/PlatformInitLib/MemDetect.c
index 662e7e85bb..76a9dc9211 100644
--- a/OvmfPkg/Library/PlatformInitLib/MemDetect.c
+++ b/OvmfPkg/Library/PlatformInitLib/MemDetect.c
@@ -248,6 +248,67 @@ PlatformReservationConflictCB (
PlatformInfoHob->PcdPciMmio64Base = NewBase;
}
+/**
+ Returns PVH memmap
+ @param Entries Pointer to PVH memmap
+ @param Count Number of entries
+ @return EFI_STATUS
+**/
+EFI_STATUS
+GetPvhMemmapEntries (
+ struct hvm_memmap_table_entry **Entries,
+ UINT32 *Count
+ )
+{
+ UINT32 *PVHResetVectorData;
+ struct hvm_start_info *pvh_start_info;
+
+ PVHResetVectorData = (VOID *)(UINTN)PcdGet32 (PcdXenPvhStartOfDayStructPtr);
+ if (PVHResetVectorData == 0) {
+ return EFI_NOT_FOUND;
+ }
+
+ pvh_start_info = (struct hvm_start_info *)(UINTN)PVHResetVectorData[0];
+
+ *Entries = (struct hvm_memmap_table_entry *)(UINTN)pvh_start_info->memmap_paddr;
+ *Count = pvh_start_info->memmap_entries;
+
+ return EFI_SUCCESS;
+}
+
+STATIC
+EFI_STATUS
+PlatformScanE820Pvh (
+ IN E820_SCAN_CALLBACK Callback,
+ IN OUT EFI_HOB_PLATFORM_INFO *PlatformInfoHob
+ )
+{
+ struct hvm_memmap_table_entry *Memmap;
+ UINT32 MemmapEntriesCount;
+ struct hvm_memmap_table_entry *Entry;
+ EFI_E820_ENTRY64 E820Entry;
+ EFI_STATUS Status;
+ UINT32 Loop;
+
+ Status = GetPvhMemmapEntries (&Memmap, &MemmapEntriesCount);
+ if (EFI_ERROR (Status)) {
+ return Status;
+ }
+
+ for (Loop = 0; Loop < MemmapEntriesCount; Loop++) {
+ Entry = Memmap + Loop;
+
+ if (Entry->type == XEN_HVM_MEMMAP_TYPE_RAM) {
+ E820Entry.BaseAddr = Entry->addr;
+ E820Entry.Length = Entry->size;
+ E820Entry.Type = Entry->type;
+ Callback (&E820Entry, PlatformInfoHob);
+ }
+ }
+
+ return EFI_SUCCESS;
+}
+
/**
Iterate over the entries in QEMU's fw_cfg E820 RAM map, call the
passed callback for each entry.
@@ -279,6 +340,10 @@ PlatformScanE820 (
EFI_E820_ENTRY64 E820Entry;
UINTN Processed;
+ if (PlatformInfoHob->HostBridgeDevId == CLOUDHV_DEVICE_ID) {
+ return PlatformScanE820Pvh (Callback, PlatformInfoHob);
+ }
+
Status = QemuFwCfgFindFile ("etc/e820", &FwCfgItem, &FwCfgSize);
if (EFI_ERROR (Status)) {
return Status;
@@ -297,36 +362,6 @@ PlatformScanE820 (
return EFI_SUCCESS;
}
-/**
- Returns PVH memmap
-
- @param Entries Pointer to PVH memmap
- @param Count Number of entries
-
- @return EFI_STATUS
-**/
-EFI_STATUS
-GetPvhMemmapEntries (
- struct hvm_memmap_table_entry **Entries,
- UINT32 *Count
- )
-{
- UINT32 *PVHResetVectorData;
- struct hvm_start_info *pvh_start_info;
-
- PVHResetVectorData = (VOID *)(UINTN)PcdGet32 (PcdXenPvhStartOfDayStructPtr);
- if (PVHResetVectorData == 0) {
- return EFI_NOT_FOUND;
- }
-
- pvh_start_info = (struct hvm_start_info *)(UINTN)PVHResetVectorData[0];
-
- *Entries = (struct hvm_memmap_table_entry *)(UINTN)pvh_start_info->memmap_paddr;
- *Count = pvh_start_info->memmap_entries;
-
- return EFI_SUCCESS;
-}
-
STATIC
UINT64
GetHighestSystemMemoryAddressFromPvhMemmap (
--
2.34.1
* [edk2-devel] [PATCH v3 2/3] OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
2024-01-12 18:31 [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM Thomas Barrett
2024-01-12 18:31 ` [edk2-devel] [PATCH v3 1/3] OvmfPkg: Add CloudHv support to PlatformScanE820 utility function Thomas Barrett
@ 2024-01-12 18:31 ` Thomas Barrett
2024-01-12 18:31 ` [edk2-devel] [PATCH v3 3/3] OvmfPkg: CloudHv: Enable PcdUse1GPageTable Thomas Barrett
` (2 subsequent siblings)
4 siblings, 0 replies; 13+ messages in thread
From: Thomas Barrett @ 2024-01-12 18:31 UTC (permalink / raw)
To: devel
Cc: Thomas Barrett, Anatol Belski, Ard Biesheuvel, Gerd Hoffmann,
Jianyong Wu, Jiewen Yao, Laszlo Ersek, Rob Bradford
From: Thomas Barrett <tbarrett@crusoeenergy.com>
In addition to initializing the PhysMemAddressWidth and
FirstNonAddress fields in PlatformInfoHob, the
PlatformAddressWidthInitialization function is responsible
for initializing the PcdPciMmio64Base and PcdPciMmio64Size
fields.
Currently, for CloudHv guests, the PcdPciMmio64Base is
placed immediately after either the 4G boundary or the
last RAM region, whichever is greater. We do not change
this behavior.
Previously, when booting CloudHv guests with more than 1TiB
of high memory, the PlatformAddressWidthInitialization
function incorrectly calculated the amount of RAM from the
overflowed 24-bit CMOS register.
Now, PlatformAddressWidthInitialization scans the E820
entries on CloudHv to detect the amount of RAM, which
allows CloudHv guests to boot with more than 1TiB of RAM.
Cc: Anatol Belski <anbelski@linux.microsoft.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jianyong Wu <jianyong.wu@arm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rob Bradford <rbradford@rivosinc.com>
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
---
OvmfPkg/Library/PlatformInitLib/MemDetect.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/OvmfPkg/Library/PlatformInitLib/MemDetect.c b/OvmfPkg/Library/PlatformInitLib/MemDetect.c
index 76a9dc9211..f042517bb6 100644
--- a/OvmfPkg/Library/PlatformInitLib/MemDetect.c
+++ b/OvmfPkg/Library/PlatformInitLib/MemDetect.c
@@ -873,6 +873,18 @@ PlatformAddressWidthInitialization (
if (PlatformInfoHob->HostBridgeDevId == 0xffff /* microvm */) {
PlatformAddressWidthFromCpuid (PlatformInfoHob, FALSE);
+ return;
+ } else if (PlatformInfoHob->HostBridgeDevId == CLOUDHV_DEVICE_ID) {
+ PlatformInfoHob->FirstNonAddress = BASE_4GB;
+ Status = PlatformScanE820 (PlatformGetFirstNonAddressCB, PlatformInfoHob);
+ if (EFI_ERROR (Status)) {
+ PlatformInfoHob->FirstNonAddress = BASE_4GB + PlatformGetSystemMemorySizeAbove4gb ();
+ }
+
+ PlatformInfoHob->PcdPciMmio64Base = PlatformInfoHob->FirstNonAddress;
+ PlatformAddressWidthFromCpuid (PlatformInfoHob, FALSE);
+ PlatformInfoHob->PcdPciMmio64Size = PlatformInfoHob->FirstNonAddress - PlatformInfoHob->PcdPciMmio64Base;
+
return;
}
--
2.34.1
* [edk2-devel] [PATCH v3 3/3] OvmfPkg: CloudHv: Enable PcdUse1GPageTable
2024-01-12 18:31 [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM Thomas Barrett
2024-01-12 18:31 ` [edk2-devel] [PATCH v3 1/3] OvmfPkg: Add CloudHv support to PlatformScanE820 utility function Thomas Barrett
2024-01-12 18:31 ` [edk2-devel] [PATCH v3 2/3] OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv Thomas Barrett
@ 2024-01-12 18:31 ` Thomas Barrett
2024-01-15 15:27 ` [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM Gerd Hoffmann
2024-01-15 17:39 ` Laszlo Ersek
4 siblings, 0 replies; 13+ messages in thread
From: Thomas Barrett @ 2024-01-12 18:31 UTC (permalink / raw)
To: devel
Cc: Thomas Barrett, Anatol Belski, Ard Biesheuvel, Gerd Hoffmann,
Jianyong Wu, Jiewen Yao, Laszlo Ersek, Rob Bradford
From: Thomas Barrett <tbarrett@crusoeenergy.com>
Without enabling PcdUse1GPageTable, CloudHv guests are limited
to a 40-bit address space, even if the hardware supports more,
which caps CloudHv guests at 1TiB of RAM.
Cc: Anatol Belski <anbelski@linux.microsoft.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Jianyong Wu <jianyong.wu@arm.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Rob Bradford <rbradford@rivosinc.com>
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
---
OvmfPkg/CloudHv/CloudHvX64.dsc | 2 ++
1 file changed, 2 insertions(+)
diff --git a/OvmfPkg/CloudHv/CloudHvX64.dsc b/OvmfPkg/CloudHv/CloudHvX64.dsc
index af594959a9..b522fa1059 100644
--- a/OvmfPkg/CloudHv/CloudHvX64.dsc
+++ b/OvmfPkg/CloudHv/CloudHvX64.dsc
@@ -566,6 +566,8 @@
# Point to the MdeModulePkg/Application/UiApp/UiApp.inf
gEfiMdeModulePkgTokenSpaceGuid.PcdBootManagerMenuFile|{ 0x21, 0xaa, 0x2c, 0x46, 0x14, 0x76, 0x03, 0x45, 0x83, 0x6e, 0x8a, 0xb6, 0xf4, 0x66, 0x23, 0x31 }
+ gEfiMdeModulePkgTokenSpaceGuid.PcdUse1GPageTable|TRUE
+
################################################################################
#
# Pcd Dynamic Section - list of all EDK II PCD Entries defined by this Platform
--
2.34.1
* Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
2024-01-12 18:31 [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM Thomas Barrett
` (2 preceding siblings ...)
2024-01-12 18:31 ` [edk2-devel] [PATCH v3 3/3] OvmfPkg: CloudHv: Enable PcdUse1GPageTable Thomas Barrett
@ 2024-01-15 15:27 ` Gerd Hoffmann
2024-01-15 15:32 ` Ard Biesheuvel
2024-01-15 17:39 ` Laszlo Ersek
4 siblings, 1 reply; 13+ messages in thread
From: Gerd Hoffmann @ 2024-01-15 15:27 UTC (permalink / raw)
To: Thomas Barrett
Cc: devel, Anatol Belski, Ard Biesheuvel, Jianyong Wu, Jiewen Yao,
Laszlo Ersek, Rob Bradford
On Fri, Jan 12, 2024 at 06:31:41PM +0000, Thomas Barrett wrote:
> This series adds support for cloud-hypervisor guests with >1TiB
> of RAM to Ovmf. This bug was solved for Qemu back in 2017
> https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> still occuring for CloudHv guests because the PlatformScanE820
> utility is not currently supported for CloudHv.
>
> My working branch for these changes can be found here:
> https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
>
> Cc: Anatol Belski <anbelski@linux.microsoft.com>
> Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Jianyong Wu <jianyong.wu@arm.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Rob Bradford <rbradford@rivosinc.com>
>
> Thomas Barrett (3):
> OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
> OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
> OvmfPkg: CloudHv: Enable PcdUse1GPageTable
Series:
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
take care,
Gerd
* Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
2024-01-15 15:27 ` [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM Gerd Hoffmann
@ 2024-01-15 15:32 ` Ard Biesheuvel
2024-01-15 16:13 ` Ard Biesheuvel
0 siblings, 1 reply; 13+ messages in thread
From: Ard Biesheuvel @ 2024-01-15 15:32 UTC (permalink / raw)
To: Gerd Hoffmann
Cc: Thomas Barrett, devel, Anatol Belski, Ard Biesheuvel, Jianyong Wu,
Jiewen Yao, Laszlo Ersek, Rob Bradford
On Mon, 15 Jan 2024 at 16:27, Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> On Fri, Jan 12, 2024 at 06:31:41PM +0000, Thomas Barrett wrote:
> > This series adds support for cloud-hypervisor guests with >1TiB
> > of RAM to Ovmf. This bug was solved for Qemu back in 2017
> > https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> > still occuring for CloudHv guests because the PlatformScanE820
> > utility is not currently supported for CloudHv.
> >
> > My working branch for these changes can be found here:
> > https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
> >
> > Cc: Anatol Belski <anbelski@linux.microsoft.com>
> > Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > Cc: Jianyong Wu <jianyong.wu@arm.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Cc: Rob Bradford <rbradford@rivosinc.com>
> >
> > Thomas Barrett (3):
> > OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
> > OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
> > OvmfPkg: CloudHv: Enable PcdUse1GPageTable
>
> Series:
> Acked-by: Gerd Hoffmann <kraxel@redhat.com>
>
Thanks, I've queued this up.
* Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
2024-01-15 15:32 ` Ard Biesheuvel
@ 2024-01-15 16:13 ` Ard Biesheuvel
2024-01-15 19:10 ` Laszlo Ersek
0 siblings, 1 reply; 13+ messages in thread
From: Ard Biesheuvel @ 2024-01-15 16:13 UTC (permalink / raw)
To: Gerd Hoffmann
Cc: Thomas Barrett, devel, Anatol Belski, Jianyong Wu, Jiewen Yao,
Laszlo Ersek, Rob Bradford
On Mon, 15 Jan 2024 at 16:32, Ard Biesheuvel <ardb@kernel.org> wrote:
>
> On Mon, 15 Jan 2024 at 16:27, Gerd Hoffmann <kraxel@redhat.com> wrote:
> >
> > On Fri, Jan 12, 2024 at 06:31:41PM +0000, Thomas Barrett wrote:
> > > This series adds support for cloud-hypervisor guests with >1TiB
> > > of RAM to Ovmf. This bug was solved for Qemu back in 2017
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> > > still occuring for CloudHv guests because the PlatformScanE820
> > > utility is not currently supported for CloudHv.
> > >
> > > My working branch for these changes can be found here:
> > > https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
> > >
> > > Cc: Anatol Belski <anbelski@linux.microsoft.com>
> > > Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> > > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > > Cc: Jianyong Wu <jianyong.wu@arm.com>
> > > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > > Cc: Laszlo Ersek <lersek@redhat.com>
> > > Cc: Rob Bradford <rbradford@rivosinc.com>
> > >
> > > Thomas Barrett (3):
> > > OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
> > > OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
> > > OvmfPkg: CloudHv: Enable PcdUse1GPageTable
> >
> > Series:
> > Acked-by: Gerd Hoffmann <kraxel@redhat.com>
> >
>
> Thanks, I've queued this up.
Merged as #5262
* Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
2024-01-12 18:31 [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM Thomas Barrett
` (3 preceding siblings ...)
2024-01-15 15:27 ` [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM Gerd Hoffmann
@ 2024-01-15 17:39 ` Laszlo Ersek
2024-01-15 17:52 ` Ard Biesheuvel
4 siblings, 1 reply; 13+ messages in thread
From: Laszlo Ersek @ 2024-01-15 17:39 UTC (permalink / raw)
To: devel, tbarrett1200
Cc: Anatol Belski, Ard Biesheuvel, Gerd Hoffmann, Jianyong Wu,
Jiewen Yao, Rob Bradford
On 1/12/24 19:31, Thomas Barrett wrote:
> This series adds support for cloud-hypervisor guests with >1TiB
> of RAM to Ovmf. This bug was solved for Qemu back in 2017
> https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> still occuring for CloudHv guests because the PlatformScanE820
> utility is not currently supported for CloudHv.
>
> My working branch for these changes can be found here:
> https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
>
> Cc: Anatol Belski <anbelski@linux.microsoft.com>
> Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Jianyong Wu <jianyong.wu@arm.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Rob Bradford <rbradford@rivosinc.com>
>
> Thomas Barrett (3):
> OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
> OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
> OvmfPkg: CloudHv: Enable PcdUse1GPageTable
>
> OvmfPkg/CloudHv/CloudHvX64.dsc | 2 +
> OvmfPkg/Library/PlatformInitLib/MemDetect.c | 107 ++++++++++++++------
> 2 files changed, 79 insertions(+), 30 deletions(-)
>
Thanks for posting v3, this one looks well-formed, so I can make some
comments.
TBH, I'm super uncomfortable with a new bunch of CLOUDHV_DEVICE_ID
branches introduced to PlatformInitLib.
OVMF supports multiple hypervisors, and the general idea has been that a
single module (or even a single platform DSC) should preferably not
attempt to support multiple hypervisors. That's why we have a separate
Xen platform, and a big number of Xen-customized, standalone modules.
The idea with this is that any particular developer is very unlikely to
develop for, and test on, multiple hypervisors. Therefore unification (=
elimination of all possible code duplication) between distinct
hypervisor code snippets is actually detrimental for maintenance,
because it technically enables a developer to regress a platform that
they never actively work with.
I believe bhyve is similarly separated out (just like Xen).
It's one thing to collect support for multiple board types (machine
types) for QEMU into a given module; that's still the same hypervisor --
testing only requires a different "-M" option on the qemu command line.
But fusing Cloud Hypervisor code with QEMU code makes me fidget in my seat.
I've now grepped the OvmfPkg directory tree for existent instances of
CLOUDHV_DEVICE_ID, and I'm very much not liking the result list. It has
seeped into a whole bunch of modules that should only be QEMU. If you
need those modules customized for CloudHv, it'd be best to detach them
for CloudHv, and eliminate the dead code (such as anything that depends
on QEMU fw_cfg), and *enforce* that the underlying platform is CloudHv.
Both communities will benefit. Again, this is all based on the empirical
fact that QEMU and CloudHv developers don't tend to cross-develop.
I understand that it's always just a small addition; in each specific
case, it's just one or two more "if"s in common code. But the end result
is terrible to maintain in the long term.
Of course this all depends on having a separate platform DSC for
CloudHv, but that one already exists: "CloudHv/CloudHvX64.dsc".
So I'd suggest (a) isolating current CloudHv logic to new library
instances, drivers, etc, (b) adding this enhancement to CloudHv's own
instance of PlatformInitLib.
Counter-arguments, objections etc welcome -- feel free to prove me wrong
(e.g. whatever I'm saying about prior art / status quo).
Also I'm not necessarily blocking this patch set; if others don't mind
this, they can ACK it (and I won't NACK).
BR
Laszlo
* Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
2024-01-15 17:39 ` Laszlo Ersek
@ 2024-01-15 17:52 ` Ard Biesheuvel
2024-01-15 18:26 ` Thomas Barrett
0 siblings, 1 reply; 13+ messages in thread
From: Ard Biesheuvel @ 2024-01-15 17:52 UTC (permalink / raw)
To: Laszlo Ersek
Cc: devel, tbarrett1200, Anatol Belski, Gerd Hoffmann, Jianyong Wu,
Jiewen Yao, Rob Bradford
On Mon, 15 Jan 2024 at 18:40, Laszlo Ersek <lersek@redhat.com> wrote:
>
> On 1/12/24 19:31, Thomas Barrett wrote:
> > This series adds support for cloud-hypervisor guests with >1TiB
> > of RAM to Ovmf. This bug was solved for Qemu back in 2017
> > https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> > still occuring for CloudHv guests because the PlatformScanE820
> > utility is not currently supported for CloudHv.
> >
> > My working branch for these changes can be found here:
> > https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
> >
> > Cc: Anatol Belski <anbelski@linux.microsoft.com>
> > Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > Cc: Jianyong Wu <jianyong.wu@arm.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Laszlo Ersek <lersek@redhat.com>
> > Cc: Rob Bradford <rbradford@rivosinc.com>
> >
> > Thomas Barrett (3):
> > OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
> > OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
> > OvmfPkg: CloudHv: Enable PcdUse1GPageTable
> >
> > OvmfPkg/CloudHv/CloudHvX64.dsc | 2 +
> > OvmfPkg/Library/PlatformInitLib/MemDetect.c | 107 ++++++++++++++------
> > 2 files changed, 79 insertions(+), 30 deletions(-)
> >
>
> Thanks for posting v3, this one looks well-formed, so I can make some
> comments.
>
> TBH, I'm super uncomfortable with a new bunch of CLOUDHV_DEVICE_ID
> branches introduced to PlatformInitLib.
>
> OVMF supports multiple hypervisors, and the general idea has been that a
> single module (or even a single platform DSC) should preferably not
> attempt to support multiple hypervisors. That's why we have a separate
> Xen platform, and a big number of Xen-customized, standalone modules.
>
> The idea with this is that any particular developer is very unlikely to
> develop for, and test on, multiple hypervisors. Therefore unification (=
> elimination of all possible code duplication) between distinct
> hypervisor code snippets is actually detrimental for maintenance,
> because it technically enables a developer to regress a platform that
> they never actively work with.
>
> I believe bhyve is similarly separated out (just like Xen).
>
> It's one thing to collect support for multiple board types (machine
> types) for QEMU into a given module; that's still the same hypervisor --
> testing only requires a different "-M" option on the qemu command line.
>
> But fusing Cloud Hypervisor code with QEMU code makes me fidget in my seat.
>
> I've now grepped the OvmfPkg directory tree for existent instances of
> CLOUDHV_DEVICE_ID, and I'm very much not liking the result list. It has
> seeped into a whole bunch of modules that should only be QEMU. If you
> need those modules customized for CloudHv, it'd be best to detach them
> for CloudHv, and eliminate the dead code (such as anything that depends
> on QEMU fw_cfg), and *enforce* that the underlying platform is CloudHv.
> Both communities will benefit. Again, this is all based on the empirical
> fact that QEMU and CloudHv developers don't tend to cross-develop.
>
> I understand that it's always just a small addition; in each specific
> case, it's just one or two more "if"s in common code. But the end result
> is terrible to maintain in the long term.
>
> Of course this all depends on having a separate platform DSC for
> CloudHv, but that one already exists: "CloudHv/CloudHvX64.dsc".
>
> So I'd suggest (a) isolating current CloudHv logic to new library
> instances, drivers, etc, (b) adding this enhancement to CloudHv's own
> instance of PlatformInitLib.
>
> Counter-arguments, objections etc welcome -- feel free to prove me wrong
> (e.g. whatever I'm saying about prior art / status quo).
>
> Also I'm not necessarily blocking this patch set; if others don't mind
> this, they can ACK it (and I won't NACK).
>
Thanks Laszlo.
I'm afraid I seem to have made your last point moot :-)
But I agree with your points above: EDK2 makes the fork&tweak approach
very easy, so CloudHv can keep its own versions of many of the QEMU
specific glue libraries. It does, however, require a certain degree of
hygiene on the part of the developer to introduce abstractions where
possible, to avoid forking huge drivers or libraries for a 2 line
delta between QEMU and CloudHv.
So let's decree that future CloudHv contributions that follow this old
pattern will not be considered until/unless there is some confidence
on our part that there is a long term plan in motion that cleans this
all up and repays the accumulated technical debt.
* Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
2024-01-15 17:52 ` Ard Biesheuvel
@ 2024-01-15 18:26 ` Thomas Barrett
2024-01-15 22:07 ` Anatol Belski
2024-01-16 6:40 ` Anatol Belski
0 siblings, 2 replies; 13+ messages in thread
From: Thomas Barrett @ 2024-01-15 18:26 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Anatol Belski, Gerd Hoffmann, Jianyong Wu, Jiewen Yao,
Laszlo Ersek, Rob Bradford, devel
Hey all,
Yeah, I don’t disagree, Laszlo. While developing these patches, it
was pretty confusing to determine which parts of PlatformInitLib were
compatible with CloudHv and which parts weren’t.
I have no idea how much work would be involved in splitting out
cloud-hypervisor code. It would be good to hear input from the
cloud-hypervisor maintainers on this.
Best,
Thomas
On Mon, Jan 15, 2024 at 9:52 AM Ard Biesheuvel <ardb@kernel.org> wrote:
> On Mon, 15 Jan 2024 at 18:40, Laszlo Ersek <lersek@redhat.com> wrote:
> >
> > On 1/12/24 19:31, Thomas Barrett wrote:
> > > This series adds support for cloud-hypervisor guests with >1TiB
> > > of RAM to Ovmf. This bug was solved for Qemu back in 2017
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
> > > still occuring for CloudHv guests because the PlatformScanE820
> > > utility is not currently supported for CloudHv.
> > >
> > > My working branch for these changes can be found here:
> > > https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
> > >
> > > Cc: Anatol Belski <anbelski@linux.microsoft.com>
> > > Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> > > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > > Cc: Jianyong Wu <jianyong.wu@arm.com>
> > > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > > Cc: Laszlo Ersek <lersek@redhat.com>
> > > Cc: Rob Bradford <rbradford@rivosinc.com>
> > >
> > > Thomas Barrett (3):
> > > OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
> > > OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
> > > OvmfPkg: CloudHv: Enable PcdUse1GPageTable
> > >
> > > OvmfPkg/CloudHv/CloudHvX64.dsc | 2 +
> > > OvmfPkg/Library/PlatformInitLib/MemDetect.c | 107 ++++++++++++++------
> > > 2 files changed, 79 insertions(+), 30 deletions(-)
> > >
> >
> > Thanks for posting v3, this one looks well-formed, so I can make some
> > comments.
> >
> > TBH, I'm super uncomfortable with a new bunch of CLOUDHV_DEVICE_ID
> > branches introduced to PlatformInitLib.
> >
> > OVMF supports multiple hypervisors, and the general idea has been that a
> > single module (or even a single platform DSC) should preferably not
> > attempt to support multiple hypervisors. That's why we have a separate
> > Xen platform, and a big number of Xen-customized, standalone modules.
> >
> > The idea with this is that any particular developer is very unlikely to
> > develop for, and test on, multiple hypervisors. Therefore unification (=
> > elimination of all possible code duplication) between distinct
> > hypervisor code snippets is actually detrimental for maintenance,
> > because it technically enables a developer to regress a platform that
> > they never actively work with.
> >
> > I believe bhyve is similarly separated out (just like Xen).
> >
> > It's one thing to collect support for multiple board types (machine
> > types) for QEMU into a given module; that's still the same hypervisor --
> > testing only requires a different "-M" option on the qemu command line.
> >
> > But fusing Cloud Hypervisor code with QEMU code makes me fidget in my
> seat.
> >
> > I've now grepped the OvmfPkg directory tree for existent instances of
> > CLOUDHV_DEVICE_ID, and I'm very much not liking the result list. It has
> > seeped into a whole bunch of modules that should only be QEMU. If you
> > need those modules customized for CloudHv, it'd be best to detach them
> > for CloudHv, and eliminate the dead code (such as anything that depends
> > on QEMU fw_cfg), and *enforce* that the underlying platform is CloudHv.
> > Both communities will benefit. Again, this is all based on the empirical
> > fact that QEMU and CloudHv developers don't tend to cross-develop.
> >
> > I understand that it's always just a small addition; in each specific
> > case, it's just one or two more "if"s in common code. But the end result
> > is terrible to maintain in the long term.
> >
> > Of course this all depends on having a separate platform DSC for
> > CloudHv, but that one already exists: "CloudHv/CloudHvX64.dsc".
> >
> > So I'd suggest (a) isolating current CloudHv logic to new library
> > instances, drivers, etc, (b) adding this enhancement to CloudHv's own
> > instance of PlatformInitLib.
> >
> > Counter-arguments, objections etc welcome -- feel free to prove me wrong
> > (e.g. whatever I'm saying about prior art / status quo).
> >
> > Also I'm not necessarily blocking this patch set; if others don't mind
> > this, they can ACK it (and I won't NACK).
> >
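The pattern at issue can be illustrated with a stripped-down, hypothetical sketch; the device-ID values and function names below are assumptions for illustration, not verbatim from the edk2 tree:

```c
#include <stdint.h>

/* Hypothetical sketch of the hypervisor-branching pattern the review
 * objects to.  Constants and names are illustrative only. */

#define CLOUDHV_DEVICE_ID   0x0d57   /* assumed value */
#define QEMU_Q35_DEVICE_ID  0x29c0   /* assumed value */

typedef struct {
  uint16_t HostBridgeDevId;
} PlatformInfo;

/* Common code that branches per hypervisor: every such "if" lets a
 * change made for one hypervisor silently regress another.  The
 * suggested alternative is a CloudHv-specific PlatformInitLib
 * instance wired up in CloudHvX64.dsc, so each path lives in its
 * own library and the branch disappears from common code. */
static const char *
MemoryMapSource (const PlatformInfo *Info)
{
  if (Info->HostBridgeDevId == CLOUDHV_DEVICE_ID) {
    return "pvh-boot-info";   /* CloudHv publishes the map via PVH */
  }
  return "qemu-fw-cfg-e820";  /* QEMU publishes it via fw_cfg */
}
```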
>
> Thanks Laszlo.
>
> I'm afraid I seem to have made your last point moot :-)
>
> But I agree with your points above: EDK2 makes the fork&tweak approach
> very easy, so CloudHv can keep its own versions of many of the QEMU
> specific glue libraries. It does, however, require a certain degree of
> hygiene on the part of the developer to introduce abstractions where
> possible, to avoid forking huge drivers or libraries for a 2 line
> delta between QEMU and CloudHv.
>
> So let's decree that future CloudHv contributions that follow this old
> pattern will not be considered until/unless there is some confidence
> on our part that there is a long term plan in motion that cleans this
> all up and repays the accumulated technical debt.
>
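On patch 3, a back-of-the-envelope count shows why PcdUse1GPageTable matters once guests approach and exceed 1 TiB. This is simplified four-level-paging arithmetic for illustration; it ignores the PTE level entirely, since the firmware maps with large pages, and assumes the mapped size is a multiple of 1 GiB:

```c
#include <stdint.h>

/* Rough count of 4 KiB page-table pages needed to identity-map
 * `Bytes` of guest-physical address space with 4-level paging,
 * using either 2 MiB or 1 GiB leaf pages. */

#define SIZE_1GB    (1ULL << 30)
#define SIZE_512GB  (1ULL << 39)

static uint64_t
IdentityMapPages (uint64_t Bytes, int Use1GPages)
{
  uint64_t Pml4Pages = 1;
  uint64_t PdptPages = (Bytes + SIZE_512GB - 1) / SIZE_512GB;
  if (Use1GPages) {
    /* 1 GiB leaf entries live directly in the PDPTs */
    return Pml4Pages + PdptPages;
  }
  /* 2 MiB leaves: one page directory per 1 GiB of space */
  return Pml4Pages + PdptPages + Bytes / SIZE_1GB;
}
```

For a 1 TiB identity map this comes to 3 pages with 1 GiB leaves versus 1027 pages (about 4 MiB of tables) with 2 MiB leaves, and the gap grows linearly with guest RAM.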
-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#113842): https://edk2.groups.io/g/devel/message/113842
Mute This Topic: https://groups.io/mt/103689730/7686176
Group Owner: devel+owner@edk2.groups.io
Unsubscribe: https://edk2.groups.io/g/devel/unsub [rebecca@openfw.io]
-=-=-=-=-=-=-=-=-=-=-=-
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
2024-01-15 16:13 ` Ard Biesheuvel
@ 2024-01-15 19:10 ` Laszlo Ersek
0 siblings, 0 replies; 13+ messages in thread
From: Laszlo Ersek @ 2024-01-15 19:10 UTC (permalink / raw)
To: Ard Biesheuvel, Gerd Hoffmann
Cc: Thomas Barrett, devel, Anatol Belski, Jianyong Wu, Jiewen Yao,
Rob Bradford
On 1/15/24 17:13, Ard Biesheuvel wrote:
> On Mon, 15 Jan 2024 at 16:32, Ard Biesheuvel <ardb@kernel.org> wrote:
>>
>> On Mon, 15 Jan 2024 at 16:27, Gerd Hoffmann <kraxel@redhat.com> wrote:
>>>
>>> On Fri, Jan 12, 2024 at 06:31:41PM +0000, Thomas Barrett wrote:
>>>> This series adds support for cloud-hypervisor guests with >1TiB
>>>> of RAM to Ovmf. This bug was solved for Qemu back in 2017
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1468526. The bug is
>>>> still occurring for CloudHv guests because the PlatformScanE820
>>>> utility is not currently supported for CloudHv.
>>>>
>>>> My working branch for these changes can be found here:
>>>> https://github.com/thomasbarrett/edk2/tree/cloud-hv-1tb-ram
>>>>
>>>> Cc: Anatol Belski <anbelski@linux.microsoft.com>
>>>> Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
>>>> Cc: Gerd Hoffmann <kraxel@redhat.com>
>>>> Cc: Jianyong Wu <jianyong.wu@arm.com>
>>>> Cc: Jiewen Yao <jiewen.yao@intel.com>
>>>> Cc: Laszlo Ersek <lersek@redhat.com>
>>>> Cc: Rob Bradford <rbradford@rivosinc.com>
>>>>
>>>> Thomas Barrett (3):
>>>> OvmfPkg: Add CloudHv support to PlatformScanE820 utility function.
>>>> OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv
>>>> OvmfPkg: CloudHv: Enable PcdUse1GPageTable
>>>
>>> Series:
>>> Acked-by: Gerd Hoffmann <kraxel@redhat.com>
>>>
>>
>> Thanks, I've queued this up.
>
> Merged as #5262
>
I was late to the party, but that's not a problem; I said I was going
to be, and I am fine with this being approved by other reviewers.
Thanks
Laszlo
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
2024-01-15 18:26 ` Thomas Barrett
@ 2024-01-15 22:07 ` Anatol Belski
2024-01-16 6:40 ` Anatol Belski
1 sibling, 0 replies; 13+ messages in thread
From: Anatol Belski @ 2024-01-15 22:07 UTC (permalink / raw)
To: Thomas Barrett, Ard Biesheuvel
Cc: Gerd Hoffmann, Jianyong Wu, Jiewen Yao, Laszlo Ersek,
Rob Bradford, devel
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM
2024-01-15 18:26 ` Thomas Barrett
2024-01-15 22:07 ` Anatol Belski
@ 2024-01-16 6:40 ` Anatol Belski
1 sibling, 0 replies; 13+ messages in thread
From: Anatol Belski @ 2024-01-16 6:40 UTC (permalink / raw)
To: devel, tbarrett1200, Ard Biesheuvel
Cc: Gerd Hoffmann, Jianyong Wu, Jiewen Yao, Laszlo Ersek,
Rob Bradford
Hi,
I was late to the party as well. At least I have now done a manual build
and some tests with Cloud Hypervisor, and it looks good.
On Mon, 2024-01-15 at 10:26 -0800, Thomas Barrett wrote:
> Hey all,
>
> Yeah, I don’t disagree, Laszlo. While developing these patches, it was
> pretty confusing to determine which parts of PlatformInitLib were
> compatible with CloudHv and which parts weren’t.
>
> I have no idea how much work would be involved in splitting out
> cloud-hypervisor code. It would be good to hear input from the cloud-
> hypervisor maintainers on this.
>
Such a refactoring certainly appears worthwhile. We should create an
issue on the Cloud Hypervisor side so this topic isn't lost and can be
tracked there, too.
Thanks!
Anatol
> Best,
> Thomas
>
> On Mon, Jan 15, 2024 at 9:52 AM Ard Biesheuvel <ardb@kernel.org> wrote:
> > [...]
^ permalink raw reply [flat|nested] 13+ messages in thread
end of thread, other threads:[~2024-01-16 6:40 UTC | newest]
Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-01-12 18:31 [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM Thomas Barrett
2024-01-12 18:31 ` [edk2-devel] [PATCH v3 1/3] OvmfPkg: Add CloudHv support to PlatformScanE820 utility function Thomas Barrett
2024-01-12 18:31 ` [edk2-devel] [PATCH v3 2/3] OvmfPkg: Update PlatformAddressWidthInitialization for CloudHv Thomas Barrett
2024-01-12 18:31 ` [edk2-devel] [PATCH v3 3/3] OvmfPkg: CloudHv: Enable PcdUse1GPageTable Thomas Barrett
2024-01-15 15:27 ` [edk2-devel] [PATCH v3 0/3] Support CloudHv guests with >1TB of RAM Gerd Hoffmann
2024-01-15 15:32 ` Ard Biesheuvel
2024-01-15 16:13 ` Ard Biesheuvel
2024-01-15 19:10 ` Laszlo Ersek
2024-01-15 17:39 ` Laszlo Ersek
2024-01-15 17:52 ` Ard Biesheuvel
2024-01-15 18:26 ` Thomas Barrett
2024-01-15 22:07 ` Anatol Belski
2024-01-16 6:40 ` Anatol Belski
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox