From: "Gerd Hoffmann" <kraxel@redhat.com>
To: "Corvin Köhne" <C.Koehne@beckhoff.com>
Cc: "devel@edk2.groups.io" <devel@edk2.groups.io>,
Ard Biesheuvel <ardb+tianocore@kernel.org>,
Jiewen Yao <jiewen.yao@intel.com>,
Jordan Justen <jordan.l.justen@intel.com>,
Rebecca Cran <rebecca@bsdio.com>,
Peter Grehan <grehan@freebsd.org>,
FreeBSD Virtualization <freebsd-virtualization@freebsd.org>
Subject: Re: [edk2-devel] [PATCH 0/1] OvmfPkg/Bhyve: QemuFwCfg support
Date: Tue, 29 Mar 2022 13:30:52 +0200
Message-ID: <20220329113052.pspiin3rvtnyygmb@sirius.home.kraxel.org>
In-Reply-To: <ff1769bd124140d799c2fd7917004a6a@beckhoff.com>

On Tue, Mar 29, 2022 at 09:57:40AM +0000, Corvin Köhne wrote:
> Hi Gerd,
>
> > There are FW_CFG_NB_CPUS and FW_CFG_MAX_CPUS. ovmf uses different
> > names, see OvmfPkg/Include/IndustryStandard/QemuFwCfg.h
> >
> > PlatformPei for qemu uses QemuFwCfgItemSmpCpuCount aka FW_CFG_NB_CPUS,
> > which is the number of cpus that are online.
> >
> > I think FW_CFG_MAX_CPUS is basically unused these days. It played a
> > role back when seabios created the acpi tables for cpu hotplug, as
> > described in the comment above. In qemu 2.0 and newer the acpi tables
> > are generated by qemu instead. The firmware just downloads them from
> > fw_cfg and installs them for the OS; it doesn't need to know virtual
> > machine configuration details any more.
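
For reference, reading that item from firmware code is a one-liner
with edk2's QemuFwCfgLib. A minimal sketch (the function name is made
up, and a real caller should check QemuFwCfgIsAvailable() first):

#include <Library/QemuFwCfgLib.h>

UINT16
GetBootCpuCount (
  VOID
  )
{
  //
  // QemuFwCfgItemSmpCpuCount aka FW_CFG_NB_CPUS: little-endian 16-bit
  // count of the currently online vCPUs.
  //
  QemuFwCfgSelectItem (QemuFwCfgItemSmpCpuCount);
  return QemuFwCfgRead16 ();
}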
>
> The FwCfgItem added by this patch is used by bhyve to build the MADT,
> so it's similar to the use case of FW_CFG_MAX_CPUS. At the moment I'm
> using an additional bhyve-specific FwCfgItem. I just want to ask
> whether it makes sense to use FW_CFG_MAX_CPUS, to avoid two items for
> the same purpose, or to keep it as is.

Given that FW_CFG_MAX_CPUS is unused on qemu these days, it is unlikely
that reusing it causes problems. IIRC the problems mentioned in the
comment only matter for VMs with more than 255 vCPUs, because somewhere
only one byte was used for the cpu/apic index.
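
To illustrate that limit: the classic MADT processor local APIC entry
carries the APIC ID in a single byte, so filling it in for more than
255 vCPUs silently truncates. A sketch using the MdePkg Acpi10.h
structures (the helper name is made up):

#include <IndustryStandard/Acpi10.h>

VOID
FillLocalApicEntry (
  OUT EFI_ACPI_1_0_PROCESSOR_LOCAL_APIC_STRUCTURE  *Lapic,
  IN  UINT32                                       CpuIndex
  )
{
  Lapic->Type            = EFI_ACPI_1_0_PROCESSOR_LOCAL_APIC;
  Lapic->Length          = (UINT8)sizeof (*Lapic);
  Lapic->AcpiProcessorId = (UINT8)CpuIndex;  // one byte only ...
  Lapic->ApicId          = (UINT8)CpuIndex;  // ... IDs above 255 truncate
  Lapic->Flags           = EFI_ACPI_1_0_LOCAL_APIC_ENABLED;
}

ACPI 4.0 added the processor local x2APIC structure with a 32-bit ID
for exactly this reason; IIRC qemu's generated MADT switches to those
entries for big guests.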
But I think I'd tend to keep the bhyve-specific behavior nevertheless,
so you don't have to worry about qemu quirks.

Or go the qemu route and generate the acpi tables on the host instead.

When you generate the acpi tables in the guest firmware, you always
have the problem of passing all the virtual machine configuration
needed to build the tables to the firmware. That information changes
over time as new features are added, which requires protocol updates,
which in turn require lockstep updates of hypervisor and firmware
before the new features can be deployed ...
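
From the firmware side the qemu route means the tables arrive pre-built
via fw_cfg and only have to be found and installed. A minimal sketch of
the discovery step, assuming edk2's QemuFwCfgLib (the helper name is
made up; the real consumer is OvmfPkg's AcpiPlatformDxe, which
processes the whole linker/loader script):

#include <Library/QemuFwCfgLib.h>

BOOLEAN
HaveHostGeneratedAcpiTables (
  VOID
  )
{
  FIRMWARE_CONFIG_ITEM  Item;
  UINTN                 Size;

  //
  // qemu publishes its ACPI linker/loader script as the fw_cfg file
  // "etc/table-loader"; if it is present, the host built the tables.
  //
  return !RETURN_ERROR (QemuFwCfgFindFile ("etc/table-loader", &Item, &Size));
}
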
HTH & take care,
Gerd