From: "Gerd Hoffmann via groups.io" <kraxel=redhat.com@groups.io>
To: mitchell.augustin@canonical.com
Cc: devel@edk2.groups.io
Subject: Re: [edk2-devel] [BUG] Extremely slow boot times with CPU and GPU passthrough and host phys-bits > 40
Date: Tue, 26 Nov 2024 09:09:54 +0100 [thread overview]
Message-ID: <vat4llx23hm5jqzwnakqy23tpuf4j5txoas5quhhccdt26hs3k@odtje24gj3nm> (raw)
In-Reply-To: <14717.1732564682653358784@groups.io>
On Mon, Nov 25, 2024 at 11:58:02AM -0800, via groups.io wrote:
> Thanks.
>
> > That is extremely slow. What does /proc/iomem look like? Anything overlapping the ECAM maybe?
>
> Slow and fast guests' and host's /proc/iomem outputs are attached. For
> the fast guest, I also included the mapping after a reboot with
> `pci=realloc pci=nocrs` set, since that is the config that actually
> allows the driver to load. I don't see any regions labeled "PCI ECAM",
> not sure if that's an issue or if it might just appear as something
> else on some configs.
It's MMCONFIG, I think older kernels name it that way instead of ECAM.
Looks normal.
> I am not sure if there's a way to force bus 0000:08 to use more of the
> overall MMIO window space after boot,
When hotplugging devices you might need the 'pref64-reserve=<size>'
property on '-device pcie-root-port' to make the bridge window larger.
For devices present at boot this is not needed, because OVMF can figure
that out automatically by looking at the BAR sizes of the PCI devices.
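As a sketch (untested; the chassis number and reservation size are placeholders you'd adapt to your setup), the root port reservation would look something like:

```shell
# Reserve a 128G 64-bit prefetchable window on the root port so a
# large-BAR device hotplugged behind it fits without reallocation.
qemu-system-x86_64 \
    -machine q35 \
    -device pcie-root-port,id=rp1,chassis=1,pref64-reserve=128G \
    -device vfio-pci,host=1b:00.0,bus=rp1
```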
> Thinking ahead here: hypothetically, if I were to propose a patch to
> add a knob for this similar to X-PciMmio64Mb to MemDetect.c, do you
> think it could be acceptable? It seems that the immediately viable
> workaround for our specific use case would be to disable
> PlatformDynamicMMIOWindow via a qemu option, and if this is an issue
> with many large BAR Nvidia GPUs, it could be broadly useful until the
> root issue is fixed in the kernel. I already patched and tested a knob
> for this in a local build, and it works (and shouldn't introduce any
> regressions, since omission of the flag would just mean PDMW gets
> called as it does today.)
I think it would be better to just give the PciMmio64Mb config hints
higher priority, i.e. if they are present (and don't conflict with
installed memory) simply use them as-is.
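For reference, the existing hint is passed to OVMF over fw_cfg; a minimal sketch (the 1048576 value, i.e. 1 TiB in megabytes, is just an example size):

```shell
# Pass the experimental OVMF MMIO window size hint via fw_cfg.
# The value is interpreted in megabytes.
qemu-system-x86_64 \
    -machine q35 \
    -fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=1048576
```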
But I'd also like to figure out what the exact problem is. Ideally OVMF
should just work without requiring manual configuration. One of the
reasons to add the dynamic mmio window was to allow pci devices with
large bars to work fine without manual tuning ...
What is the difference between the slow dynamic mmio window
configuration and the fast manual PciMmio64Mb configuration?
Can you try to change the manual configuration to be more like the
dynamic mmio window configuration, one change at a time (first size,
next base, possibly multiple base addresses) and see where exactly it
breaks?
Does it make a difference if you add an IOMMU to the virtual machine?
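One way to try that on q35 (a sketch; caching-mode is generally needed for VFIO device assignment to work together with the vIOMMU):

```shell
# Add an emulated Intel IOMMU to the guest; interrupt remapping
# requires the split irqchip on q35.
qemu-system-x86_64 \
    -machine q35,kernel-irqchip=split \
    -device intel-iommu,caching-mode=on
```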
> From guest VM that booted quickly, has 52 physbits/57 virtual bits, and in which the GPU driver does not work since `pci=realloc pci=nocrs` isn't set:
>
> e0000000-efffffff : PCI MMCONFIG 0000 [bus 00-ff]
> e0000000-efffffff : Reserved
> e0000000-efffffff : pnp 00:04
ECAM with old name.
> From guest VM that booted slowly, has 52 physbits/57 virtual bits, and in which the GPU driver works correctly:
>
> 380000000000-3937ffffffff : PCI Bus 0000:00
> 380000000000-382002ffffff : PCI Bus 0000:06
> 380000000000-381fffffffff : 0000:06:00.0
> 382000000000-382001ffffff : 0000:06:00.0
> 382002000000-382002ffffff : 0000:06:00.0
This is the NPU I guess?
[ host /proc/iomem below ]
> 210000000000-21ffffffffff : PCI Bus 0000:15
> 21a000000000-21e047ffffff : PCI Bus 0000:16
> 21a000000000-21e047ffffff : PCI Bus 0000:17
> 21a000000000-21e042ffffff : PCI Bus 0000:19
> 21a000000000-21e042ffffff : PCI Bus 0000:1a
> 21a000000000-21e042ffffff : PCI Bus 0000:1b
> 21a000000000-21bfffffffff : 0000:1b:00.0
> 21c000000000-21dfffffffff : 0000:1b:00.0
> 21e000000000-21e03fffffff : 0000:1b:00.0
> 21e040000000-21e041ffffff : 0000:1b:00.0
> 21e042000000-21e042ffffff : 0000:1b:00.0
Hmm. Looks like the device has more resources on the host.
Maybe *that* is the problem.
What does 'sudo lspci -v' print for the NPU on the host and
in the guest?
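Something like this, using the device addresses from the /proc/iomem dumps above (the guest address differs from the host one):

```shell
# On the host: show BARs of the passed-through device.
sudo lspci -vs 1b:00.0

# In the guest: the same device appears at a different address.
sudo lspci -vs 06:00.0
```

Comparing the "Memory at ... [size=...]" lines from both outputs should show whether the guest sees fewer or smaller BARs than the host.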
take care,
Gerd