public inbox for devel@edk2.groups.io
From: "Laszlo Ersek" <lersek@redhat.com>
To: Igor Druzhinin <igor.druzhinin@citrix.com>,
	devel@edk2.groups.io, xen-devel@lists.xenproject.org
Cc: jordan.l.justen@intel.com, ard.biesheuvel@arm.com,
	anthony.perard@citrix.com, julien@xen.org
Subject: Re: [PATCH] OvmfPkg/XenPlatformPei: Grab 64-bit PCI MMIO hole size from OVMF info table
Date: Mon, 11 Jan 2021 09:10:19 +0100	[thread overview]
Message-ID: <1bf69ab6-a183-f6d6-293b-1db2bea78fdc@redhat.com> (raw)
In-Reply-To: <1610336718-7064-1-git-send-email-igor.druzhinin@citrix.com>

On 01/11/21 04:45, Igor Druzhinin wrote:
> We faced a problem with passing through a PCI device with a 64GB BAR to a
> UEFI guest. The BAR is, as expected, programmed into the 64-bit PCI
> aperture at a 64G address, which pushes the physical address space to 37
> bits. That is above the 36-bit width that OVMF currently exposes to a
> guest without tweaking the PcdPciMmio64Size knob.
> 
> We can't really set this knob to a very high value and expect that to work
> on all CPUs, as the maximum physical address width varies significantly
> between models. We also can't simply take the CPU address width at runtime
> and use that, as we need to cover all of it with identity-mapped pages for
> the DXE phase.
> 
> The exact size of the upper PCI MMIO hole is only known after hvmloader
> has placed all of the BARs and calculated the necessary aperture, which is
> exposed through ACPI. This information is not readily available to OVMF in
> the PEI phase, so let's expose it using the existing extensible binary
> interface between OVMF and hvmloader, preserving compatibility.
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> ---
> 
> The change is backward compatible and has a companion change for hvmloader.
> 
> Are we still waiting to clean up Xen stuff from QEMU platform?

Yes, this BZ is still open:

  https://bugzilla.tianocore.org/show_bug.cgi?id=2122

That said...

> Or do I need to duplicate my changes there (I hope not)?

... I hope not, as well.

(An automated solution for this issue remains elusive for QEMU as well; we
revisit it from time to time, but it's a very tough nut to crack. Users
have been able to tweak the aperture size with the experimental (no
support promised) fw_cfg file "opt/ovmf/X-PciMmio64Mb". But Xen has no
fw_cfg, so this patch certainly looks justified.)
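(For reference only, and not applicable to Xen: on QEMU, that experimental
knob is passed on the command line roughly as below. The firmware file
paths and the size value are illustrative, not a recommendation.)

```shell
# Illustrative QEMU invocation: widen OVMF's 64-bit PCI MMIO aperture
# via the experimental, unsupported fw_cfg knob (value is in MB; 65536
# requests a 64 GB aperture). OVMF_CODE.fd / OVMF_VARS.fd are placeholders
# for your distribution's OVMF build.
qemu-system-x86_64 \
  -machine q35,accel=kvm \
  -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=OVMF_VARS.fd \
  -fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536
```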

Some style comments:

> 
> ---
>  OvmfPkg/XenPlatformPei/MemDetect.c |  6 ++++-
>  OvmfPkg/XenPlatformPei/Platform.h  |  8 +++++++
>  OvmfPkg/XenPlatformPei/Xen.c       | 47 ++++++++++++++++++++++++++++++++++++++
>  OvmfPkg/XenPlatformPei/Xen.h       | 12 +++++++++-
>  4 files changed, 71 insertions(+), 2 deletions(-)
> 
> diff --git a/OvmfPkg/XenPlatformPei/MemDetect.c b/OvmfPkg/XenPlatformPei/MemDetect.c
> index 1f81eee..4175a2f 100644
> --- a/OvmfPkg/XenPlatformPei/MemDetect.c
> +++ b/OvmfPkg/XenPlatformPei/MemDetect.c
> @@ -227,6 +227,7 @@ GetFirstNonAddress (
>    UINT64               FirstNonAddress;
>    UINT64               Pci64Base, Pci64Size;
>    RETURN_STATUS        PcdStatus;
> +  EFI_STATUS           Status;
>  
>    FirstNonAddress = BASE_4GB + GetSystemMemorySizeAbove4gb ();
>  
> @@ -245,7 +246,10 @@ GetFirstNonAddress (
>    // Otherwise, in order to calculate the highest address plus one, we must
>    // consider the 64-bit PCI host aperture too. Fetch the default size.
>    //
> -  Pci64Size = PcdGet64 (PcdPciMmio64Size);
> +  Status = XenGetPciMmioInfo (NULL, NULL, &Pci64Base, &Pci64Size);
> +  if (EFI_ERROR (Status)) {
> +    Pci64Size = PcdGet64 (PcdPciMmio64Size);
> +  }
>  
>    if (Pci64Size == 0) {
>      if (mBootMode != BOOT_ON_S3_RESUME) {
> diff --git a/OvmfPkg/XenPlatformPei/Platform.h b/OvmfPkg/XenPlatformPei/Platform.h
> index 7661f4a..6024cae 100644
> --- a/OvmfPkg/XenPlatformPei/Platform.h
> +++ b/OvmfPkg/XenPlatformPei/Platform.h
> @@ -127,6 +127,14 @@ XenGetE820Map (
>    UINT32 *Count
>    );
>  
> +EFI_STATUS
> +XenGetPciMmioInfo (
> +  UINT64 *PciBase,
> +  UINT64 *PciSize,
> +  UINT64 *Pci64Base,
> +  UINT64 *Pci64Size
> +  );
> +
>  extern EFI_BOOT_MODE mBootMode;
>  
>  extern UINT8 mPhysMemAddressWidth;
> diff --git a/OvmfPkg/XenPlatformPei/Xen.c b/OvmfPkg/XenPlatformPei/Xen.c
> index c41fecd..3c1970d 100644
> --- a/OvmfPkg/XenPlatformPei/Xen.c
> +++ b/OvmfPkg/XenPlatformPei/Xen.c
> @@ -114,6 +114,53 @@ XenGetE820Map (
>  }
>  
>  /**
> +  Returns the layouts of the PCI MMIO holes provided by hvmloader
> +
> +  @param PciBase      Pointer to MMIO hole base address
> +  @param PciSize      Pointer to MMIO hole size
> +  @param Pci64Base    Pointer to upper MMIO hole base address
> +  @param Pci64Size    Pointer to upper MMIO hole size
> +
> +  @return EFI_STATUS
> +**/
> +EFI_STATUS
> +XenGetPciMmioInfo (
> +  UINT64 *PciBase,
> +  UINT64 *PciSize,
> +  UINT64 *Pci64Base,
> +  UINT64 *Pci64Size
> +  )
> +{
> +  UINT64 *Tables;
> +  EFI_XEN_OVMF_PCI_INFO *PciInfo;
> +
> +  if (mXenHvmloaderInfo == NULL) {
> +    return EFI_NOT_FOUND;
> +  }
> +
> +  if (mXenHvmloaderInfo->TablesCount < OVMF_INFO_PCI_TABLE + 1) {
> +    return EFI_UNSUPPORTED;
> +  }
> +
> +  ASSERT (mXenHvmloaderInfo->Tables &&

(1) Please spell out:

  mXenHvmloaderInfo->Tables != 0

> +          mXenHvmloaderInfo->Tables < MAX_ADDRESS);
> +  Tables = (UINT64 *) mXenHvmloaderInfo->Tables;
> +  PciInfo = (EFI_XEN_OVMF_PCI_INFO *) Tables[OVMF_INFO_PCI_TABLE];
> +
> +  ASSERT (PciInfo && (EFI_PHYSICAL_ADDRESS) PciInfo < MAX_ADDRESS);

(2) Please spell out

  PciInfo != NULL

> +  if (PciBase && PciSize) {

(3) Same here -- please use explicit comparisons.

> +    *PciBase = (UINT64) PciInfo->LowStart;
> +    *PciSize = (UINT64) (PciInfo->LowEnd - PciInfo->LowStart);
> +  }
> +  if (Pci64Base && Pci64Size) {

(4) Ditto

I'll wait for feedback from the OvmfPkg Xen reviewers; from my side,
with the style warts fixed:

Acked-by: Laszlo Ersek <lersek@redhat.com>

Thanks
Laszlo

> +    *Pci64Base = (UINT64) PciInfo->HiStart;
> +    *Pci64Size = (UINT64) (PciInfo->HiEnd - PciInfo->HiStart);
> +  }
> +
> +  return EFI_SUCCESS;
> +}
> +
> +/**
>    Connects to the Hypervisor.
>  
>    @return EFI_STATUS
> diff --git a/OvmfPkg/XenPlatformPei/Xen.h b/OvmfPkg/XenPlatformPei/Xen.h
> index 2605481..c6e5fbb 100644
> --- a/OvmfPkg/XenPlatformPei/Xen.h
> +++ b/OvmfPkg/XenPlatformPei/Xen.h
> @@ -15,7 +15,7 @@ SPDX-License-Identifier: BSD-2-Clause-Patent
>  // Physical address of OVMF info
>  #define OVMF_INFO_PHYSICAL_ADDRESS 0x00001000
>  
> -// This structure must match the definition on Xen side
> +// These structures must match the definition on Xen side
>  #pragma pack(1)
>  typedef struct {
>    CHAR8 Signature[14]; // XenHVMOVMF\0
> @@ -34,6 +34,16 @@ typedef struct {
>    EFI_PHYSICAL_ADDRESS E820;
>    UINT32 E820EntriesCount;
>  } EFI_XEN_OVMF_INFO;
> +
> +// This extra table gives layout of PCI apertures in a Xen guest
> +#define OVMF_INFO_PCI_TABLE 0
> +
> +typedef struct {
> +  EFI_PHYSICAL_ADDRESS LowStart;
> +  EFI_PHYSICAL_ADDRESS LowEnd;
> +  EFI_PHYSICAL_ADDRESS HiStart;
> +  EFI_PHYSICAL_ADDRESS HiEnd;
> +} EFI_XEN_OVMF_PCI_INFO;
>  #pragma pack()
>  
>  #endif /* __XEN_H__ */
> 



Thread overview: 4+ messages
2021-01-11  3:45 [PATCH] OvmfPkg/XenPlatformPei: Grab 64-bit PCI MMIO hole size from OVMF info table Igor Druzhinin
2021-01-11  8:10 ` Laszlo Ersek [this message]
2021-01-19 13:20 ` Anthony PERARD
2021-01-19 13:25   ` igor.druzhinin
