From: "Laszlo Ersek" <lersek@redhat.com>
To: Alexander Graf <graf@amazon.com>
Cc: devel@edk2.groups.io, David Woodhouse <dwmw2@infradead.org>,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: Re: [PATCH v3 4/4] OvmfPkg: don't assign PCI BARs above 4GiB when CSM enabled
Date: Thu, 27 Jun 2019 20:43:56 +0200	[thread overview]
Message-ID: <b47ec57d-2c6f-7766-c5c4-b55edeb0a703@redhat.com> (raw)
In-Reply-To: <91d912c9-533a-22a3-4aa3-0fe114e1149f@amazon.com>

On 06/27/19 18:36, Alexander Graf wrote:
> Hi David and Laszlo,
> 
> (with broken threading because gmane still mirrors the old ML ...)
> 
>> Mostly, this is only necessary for devices that the CSM might have
>> native support for, such as VirtIO and NVMe; PciBusDxe will already
>> degrade devices to 32-bit if they have an OpROM.
>>
>> However, there doesn't seem to be a generic way of requesting PciBusDxe
>> to downgrade specific devices.
>>
>> There's IncompatiblePciDeviceSupportProtocol but that doesn't provide
>> the PCI class information or a handle to the device itself, so there's
>> no simple way to just match on all NVMe devices, for example.
>>
>> Just leave gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size set to zero for
>> CSM builds, until/unless that can be fixed.
>>
>> Signed-off-by: David Woodhouse <dwmw2@...>
>> Reviewed-by: Laszlo Ersek <lersek@...>
>> ---
>>  OvmfPkg/OvmfPkgIa32X64.dsc | 4 ++++
>>  OvmfPkg/OvmfPkgX64.dsc     | 4 ++++
>>  2 files changed, 8 insertions(+)
>>
>> diff --git a/OvmfPkg/OvmfPkgIa32X64.dsc b/OvmfPkg/OvmfPkgIa32X64.dsc
>> index 639e33cb285f..ad20531ceb8b 100644
>> --- a/OvmfPkg/OvmfPkgIa32X64.dsc
>> +++ b/OvmfPkg/OvmfPkgIa32X64.dsc
>> @@ -543,7 +543,11 @@ [PcdsDynamicDefault]
>>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Base|0x0
>>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Size|0x0
>>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Base|0x0
>> +!ifdef $(CSM_ENABLE)
>> +  gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size|0x0
>> +!else
>>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size|0x800000000
>> +!endif
>>  
>>    gEfiMdePkgTokenSpaceGuid.PcdPlatformBootTimeOut|0
>>  
>> diff --git a/OvmfPkg/OvmfPkgX64.dsc b/OvmfPkg/OvmfPkgX64.dsc
>> index 69a3497c2c9e..0542ac2235b4 100644
>> --- a/OvmfPkg/OvmfPkgX64.dsc
>> +++ b/OvmfPkg/OvmfPkgX64.dsc
>> @@ -542,7 +542,11 @@ [PcdsDynamicDefault]
>>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Base|0x0
>>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Size|0x0
>>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Base|0x0
>> +!ifdef $(CSM_ENABLE)
>> +  gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size|0x0
> 
> IIRC x86 Linux just takes firmware-provided BAR maps as they are and
> doesn't map on its own.

That's correct.


> Or does it map if a BAR was previously unmapped?

My understanding is that Linux re-maps the BARs if it dislikes something
(e.g. the root bridge apertures described in the ACPI _CRS do not cover
some of the ranges programmed into the actual BARs).

IIUC reallocation can be requested on the kernel cmdline as well, with
pci=realloc.
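For example (only a sketch -- the console and root parameters here are
made up, purely for illustration), on the guest kernel command line:

  console=ttyS0 root=/dev/vda1 pci=realloc=on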

I believe you could experiment with this question using the
"pci-testdev" QEMU device model -- in QEMU commit 417463341e3e
("pci-testdev: add optional memory bar", 2018-11-05), Gerd added the
"membar" property for just that purpose (IIRC).
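
Something like the following might give you a large, otherwise inert
BAR to play with (I'm writing the property syntax from memory, so
please double-check it against the QEMU documentation):

  -device pci-testdev,membar=4G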


> In the former case, wouldn't that mean that we're breaking GPU
> passthrough (*big* BARs) for OVMF if the OVMF version happens to support
> CSM? So if a distro decides to turn on CSM, that would be a very subtle
> regression.

Yes, in theory this is a possible regression. It requires the user to
combine huge BARs with an OVMF build that includes the CSM.

I've been aware of this, but it seems like such a corner case that I
didn't intend to raise it. To begin with, building OVMF with the CSM is
a niche use case in itself.

David described (but I've forgotten the details by now) some kind of
setup or service where a user cannot choose between pure SeaBIOS and
pure OVMF for their virtual machine. They are given just one firmware;
so, in order to let users boot both legacy and UEFI OSes, it makes
sense for the service provider to offer OVMF+CSM.

Fine -- but, in that kind of service, where users are prevented from
picking one of two "pure" firmwares, do we really expect users to have
the configuration freedom to shove GPUs with huge BARs into their VMs?


> Would it be possible to change the PCI mapping logic to just simply
> *prefer* low BAR space if there's some available and the BAR is not big
> (<64MB for example)?

PciBusDxe in MdeModulePkg is practically untouchable at such a
"strategy" level. We can fix bugs in it, but only surgically. (This is
not something that I endorse; I'm just observing it.)

Platforms are expected to influence the behavior of PciBusDxe by
implementing the "incompatible PCI device support" protocol. OVMF
already does that (IncompatiblePciDeviceSupportDxe), but the protocol
interface (from the PI spec) is not flexible enough for what David
actually wanted; otherwise, this restriction would have been expressed
per-controller.
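
For reference, the member function of that protocol looks roughly like
this (quoting MdePkg's IncompatiblePciDeviceSupport.h from memory, so
please double-check the exact prototype):

  typedef
  EFI_STATUS
  (EFIAPI *EFI_INCOMPATIBLE_PCI_DEVICE_SUPPORT_CHECK_DEVICE)(
    IN  EFI_INCOMPATIBLE_PCI_DEVICE_SUPPORT_PROTOCOL  *This,
    IN  UINTN                                         VendorId,
    IN  UINTN                                         DeviceId,
    IN  UINTN                                         RevisionId,
    IN  UINTN                                         SubsystemVendorId,
    IN  UINTN                                         SubsystemDeviceId,
    OUT VOID                                          **Configuration
    );

Note that the callback only receives numeric IDs -- no PCI class code
and no handle to the device itself -- so there is no simple way to say
"apply this restriction to every NVMe controller", for example.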

If, in a given service, the problem that you describe above outweighs
the issue that David intends to mitigate with the patch, then the
vendor can rebuild OVMF with a suitable "--pcd=..." option. Or else,
they can even use

  -fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=32768

dynamically, on the QEMU command line. (Please see commit 7e5b1b670c38,
"OvmfPkg: PlatformPei: determine the 64-bit PCI host aperture for X64
DXE", 2016-03-23.)
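
As a sketch of the rebuild route (the toolchain tag and the rest of
the build options here are just placeholders -- adapt as needed):

  build -p OvmfPkg/OvmfPkgIa32X64.dsc -a IA32 -a X64 -t GCC5 \
    -D CSM_ENABLE \
    --pcd=gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size=0x800000000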


> That way we could have CSM enabled OVMF for everyone ;)

Well, as long as we're discussing "everyone": we should forget about the
CSM altogether, in the long term. The CSM is a concession towards OSes
that are stuck in the past; a concession that is hugely complex and
difficult to debug & maintain. It is also incompatible with Secure Boot.
Over time, we should spend less and less time & energy on the CSM. Just
my opinion, of course. :)

Thanks
Laszlo
