public inbox for devel@edk2.groups.io
* [PATCH v3 0/4] OvmfPkg: CSM boot fixes
@ 2019-06-26 11:37 David Woodhouse
  2019-06-26 11:37 ` [PATCH v3 1/4] OvmfPkg/LegacyBios: set NumberBbsEntries to the size of BbsTable David Woodhouse
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: David Woodhouse @ 2019-06-26 11:37 UTC (permalink / raw)
  To: devel; +Cc: Laszlo Ersek

For v3, leaving out the cosmetic parts that touch code outside OvmfPkg. 
This series is now purely the correctness fixes within OvmfPkg which are 
required to make CSM boots work properly again.

The first two patches allow NVMe and VirtIO disks to be used as Legacy
boot targets, since nobody really uses IDE any more.

The third avoids using QemuVideoDxe when we have CSM, as the INT 10h 
shim installed by QemuVideoDxe conflicts with a real legacy video BIOS
being installed.

Finally, avoid placing PCI BARs above 4GiB. Strictly speaking we only 
need this for PCI devices which might be natively supported by the CSM 
BIOS, like NVMe. Devices with an OpROM already get special-cased to stay 
below 4GiB. But an IncompatiblePciDeviceSupportProtocol implementation 
doesn't get to see the PCI device class, only the vendor/device IDs, so 
we can't use it to downgrade more selectively. Instead, just default to 
putting everything below 4GiB.


David Woodhouse (4):
  OvmfPkg/LegacyBios: set NumberBbsEntries to the size of BbsTable
  OvmfPkg/LegacyBbs: Add boot entries for VirtIO and NVME devices
  OvmfPkg: Don't build in QemuVideoDxe when we have CSM
  OvmfPkg: don't assign PCI BARs above 4GiB when CSM enabled

 OvmfPkg/Csm/LegacyBiosDxe/LegacyBbs.c  | 157 ++++++++++++++++++++++++-
 OvmfPkg/Csm/LegacyBiosDxe/LegacyBios.c |   7 +-
 OvmfPkg/OvmfPkgIa32.dsc                |   2 +
 OvmfPkg/OvmfPkgIa32.fdf                |   5 +-
 OvmfPkg/OvmfPkgIa32X64.dsc             |   6 +
 OvmfPkg/OvmfPkgIa32X64.fdf             |   5 +-
 OvmfPkg/OvmfPkgX64.dsc                 |   6 +
 OvmfPkg/OvmfPkgX64.fdf                 |   5 +-
 8 files changed, 179 insertions(+), 14 deletions(-)

-- 
2.21.0


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v3 1/4] OvmfPkg/LegacyBios: set NumberBbsEntries to the size of BbsTable
  2019-06-26 11:37 [PATCH v3 0/4] OvmfPkg: CSM boot fixes David Woodhouse
@ 2019-06-26 11:37 ` David Woodhouse
  2019-06-26 11:37 ` [PATCH v3 2/4] OvmfPkg/LegacyBbs: Add boot entries for VirtIO and NVME devices David Woodhouse
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: David Woodhouse @ 2019-06-26 11:37 UTC (permalink / raw)
  To: devel; +Cc: Laszlo Ersek

This is hard-coded in the IntThunk structure, and the additional entries
will be needed for other devices like VirtIO and NVMe disks. So admit
that they exist.

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
---
 OvmfPkg/Csm/LegacyBiosDxe/LegacyBios.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/OvmfPkg/Csm/LegacyBiosDxe/LegacyBios.c b/OvmfPkg/Csm/LegacyBiosDxe/LegacyBios.c
index 05e3ffd2bbb8..5e795bfe6570 100644
--- a/OvmfPkg/Csm/LegacyBiosDxe/LegacyBios.c
+++ b/OvmfPkg/Csm/LegacyBiosDxe/LegacyBios.c
@@ -565,12 +565,13 @@ ShadowAndStartLegacy16 (
 
   EfiToLegacy16BootTable->BbsTable  = (UINT32)(UINTN)BbsTable;
   Private->BbsTablePtr              = (VOID *) BbsTable;
+
   //
-  // Skip Floppy and possible onboard IDE drives
+  // Populate entire table with BBS_IGNORE_ENTRY
   //
-  EfiToLegacy16BootTable->NumberBbsEntries = 1 + 2 * MAX_IDE_CONTROLLER;
+  EfiToLegacy16BootTable->NumberBbsEntries = MAX_BBS_ENTRIES;
 
-  for (Index = 0; Index < (sizeof (Private->IntThunk->BbsTable) / sizeof (BBS_TABLE)); Index++) {
+  for (Index = 0; Index < MAX_BBS_ENTRIES; Index++) {
     BbsTable[Index].BootPriority = BBS_IGNORE_ENTRY;
   }
   //
-- 
2.21.0



* [PATCH v3 2/4] OvmfPkg/LegacyBbs: Add boot entries for VirtIO and NVME devices
  2019-06-26 11:37 [PATCH v3 0/4] OvmfPkg: CSM boot fixes David Woodhouse
  2019-06-26 11:37 ` [PATCH v3 1/4] OvmfPkg/LegacyBios: set NumberBbsEntries to the size of BbsTable David Woodhouse
@ 2019-06-26 11:37 ` David Woodhouse
  2019-06-26 11:37 ` [PATCH v3 3/4] OvmfPkg: Don't build in QemuVideoDxe when we have CSM David Woodhouse
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: David Woodhouse @ 2019-06-26 11:37 UTC (permalink / raw)
  To: devel; +Cc: Laszlo Ersek

Iterate over the available block devices in much the same way as
BdsLibEnumerateAllBootOption() does, but limiting to those devices
which are PCI-backed, which can be represented in the BbsTable.

One day we might need to extend the BbsTable to allow us to distinguish
between different NVMe namespaces on a device.

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
---
 OvmfPkg/Csm/LegacyBiosDxe/LegacyBbs.c | 157 +++++++++++++++++++++++++-
 1 file changed, 152 insertions(+), 5 deletions(-)

diff --git a/OvmfPkg/Csm/LegacyBiosDxe/LegacyBbs.c b/OvmfPkg/Csm/LegacyBiosDxe/LegacyBbs.c
index 6b1dd344f344..5e4c7a249ef1 100644
--- a/OvmfPkg/Csm/LegacyBiosDxe/LegacyBbs.c
+++ b/OvmfPkg/Csm/LegacyBiosDxe/LegacyBbs.c
@@ -140,10 +140,14 @@ LegacyBiosBuildBbs (
   IN  BBS_TABLE                 *BbsTable
   )
 {
-  UINTN     BbsIndex;
-  HDD_INFO  *HddInfo;
-  UINTN     HddIndex;
-  UINTN     Index;
+  UINTN       BbsIndex;
+  HDD_INFO    *HddInfo;
+  UINTN       HddIndex;
+  UINTN       Index;
+  EFI_HANDLE  *BlockIoHandles;
+  UINTN       NumberBlockIoHandles;
+  UINTN       BlockIndex;
+  EFI_STATUS  Status;
 
   //
   // First entry is floppy.
@@ -252,8 +256,151 @@ LegacyBiosBuildBbs (
     }
   }
 
+  //
+  // Add non-IDE block devices
+  //
+  BbsIndex = HddIndex * 2 + 1;
+
+  Status = gBS->LocateHandleBuffer (
+                  ByProtocol,
+                  &gEfiBlockIoProtocolGuid,
+                  NULL,
+                  &NumberBlockIoHandles,
+                  &BlockIoHandles
+                  );
+
+  if (!EFI_ERROR (Status)) {
+    UINTN                     Removable;
+    EFI_BLOCK_IO_PROTOCOL     *BlkIo;
+    EFI_PCI_IO_PROTOCOL       *PciIo;
+    EFI_DEVICE_PATH_PROTOCOL  *DevicePath;
+    EFI_DEVICE_PATH_PROTOCOL  *DevicePathNode;
+    EFI_HANDLE                PciHandle;
+    UINTN                     SegNum;
+    UINTN                     BusNum;
+    UINTN                     DevNum;
+    UINTN                     FuncNum;
+
+    for (Removable = 0; Removable < 2; Removable++) {
+      for (BlockIndex = 0; BlockIndex < NumberBlockIoHandles; BlockIndex++) {
+        Status = gBS->HandleProtocol (
+                        BlockIoHandles[BlockIndex],
+                        &gEfiBlockIoProtocolGuid,
+                        (VOID **) &BlkIo
+                        );
+
+        if (EFI_ERROR (Status)) {
+          continue;
+        }
+
+        //
+        // Skip the logical partitions
+        //
+        if (BlkIo->Media->LogicalPartition) {
+          continue;
+        }
+
+        //
+        // Skip the fixed block io then the removable block io
+        //
+        if (BlkIo->Media->RemovableMedia == ((Removable == 0) ? FALSE : TRUE)) {
+          continue;
+        }
+
+        //
+        // Get Device Path
+        //
+        Status = gBS->HandleProtocol (
+                        BlockIoHandles[BlockIndex],
+                        &gEfiDevicePathProtocolGuid,
+                        (VOID **) &DevicePath
+                        );
+        if (EFI_ERROR (Status)) {
+          continue;
+        }
+
+        //
+        // Skip ATA devices as they have already been handled
+        //
+        DevicePathNode = DevicePath;
+        while (!IsDevicePathEnd (DevicePathNode)) {
+          if (DevicePathType (DevicePathNode) == MESSAGING_DEVICE_PATH &&
+              DevicePathSubType (DevicePathNode) == MSG_ATAPI_DP) {
+            break;
+          }
+          DevicePathNode = NextDevicePathNode (DevicePathNode);
+        }
+        if (!IsDevicePathEnd (DevicePathNode)) {
+          continue;
+        }
+
+        //
+        //  Locate which PCI device
+        //
+        Status = gBS->LocateDevicePath (
+                        &gEfiPciIoProtocolGuid,
+                        &DevicePath,
+                        &PciHandle
+                        );
+        if (EFI_ERROR (Status)) {
+          continue;
+        }
+
+        Status = gBS->HandleProtocol (
+                        PciHandle,
+                        &gEfiPciIoProtocolGuid,
+                        (VOID **) &PciIo
+                        );
+        if (EFI_ERROR (Status)) {
+          continue;
+        }
+
+        Status = PciIo->GetLocation (
+                          PciIo,
+                          &SegNum,
+                          &BusNum,
+                          &DevNum,
+                          &FuncNum
+                          );
+        if (EFI_ERROR (Status)) {
+          continue;
+        }
+
+        if (SegNum != 0) {
+          DEBUG ((DEBUG_WARN, "CSM cannot use PCI devices in segment %Lu\n",
+            (UINT64) SegNum));
+          continue;
+        }
+
+        DEBUG ((DEBUG_INFO, "Add Legacy Bbs entry for PCI %d/%d/%d\n",
+          BusNum, DevNum, FuncNum));
+
+        BbsTable[BbsIndex].Bus                      = BusNum;
+        BbsTable[BbsIndex].Device                   = DevNum;
+        BbsTable[BbsIndex].Function                 = FuncNum;
+        BbsTable[BbsIndex].Class                    = 1;
+        BbsTable[BbsIndex].SubClass                 = 0x80;
+        BbsTable[BbsIndex].StatusFlags.OldPosition  = 0;
+        BbsTable[BbsIndex].StatusFlags.Reserved1    = 0;
+        BbsTable[BbsIndex].StatusFlags.Enabled      = 0;
+        BbsTable[BbsIndex].StatusFlags.Failed       = 0;
+        BbsTable[BbsIndex].StatusFlags.MediaPresent = 0;
+        BbsTable[BbsIndex].StatusFlags.Reserved2    = 0;
+        BbsTable[BbsIndex].DeviceType               = BBS_HARDDISK;
+        BbsTable[BbsIndex].BootPriority             = BBS_UNPRIORITIZED_ENTRY;
+        BbsIndex++;
+
+        if (BbsIndex == MAX_BBS_ENTRIES) {
+          Removable = 2;
+          break;
+        }
+      }
+    }
+
+    FreePool (BlockIoHandles);
+  }
+
   return EFI_SUCCESS;
-
 }
 
 
-- 
2.21.0



* [PATCH v3 3/4] OvmfPkg: Don't build in QemuVideoDxe when we have CSM
  2019-06-26 11:37 [PATCH v3 0/4] OvmfPkg: CSM boot fixes David Woodhouse
  2019-06-26 11:37 ` [PATCH v3 1/4] OvmfPkg/LegacyBios: set NumberBbsEntries to the size of BbsTable David Woodhouse
  2019-06-26 11:37 ` [PATCH v3 2/4] OvmfPkg/LegacyBbs: Add boot entries for VirtIO and NVME devices David Woodhouse
@ 2019-06-26 11:37 ` David Woodhouse
  2019-06-26 11:37 ` [PATCH v3 4/4] OvmfPkg: don't assign PCI BARs above 4GiB when CSM enabled David Woodhouse
  2019-06-26 13:19 ` [PATCH v3 0/4] OvmfPkg: CSM boot fixes Laszlo Ersek
  4 siblings, 0 replies; 11+ messages in thread
From: David Woodhouse @ 2019-06-26 11:37 UTC (permalink / raw)
  To: devel; +Cc: Laszlo Ersek

QemuVideoDxe installs its own legacy INT 10h handler for the benefit of
systems like Windows 2008r2 which attempt to use INT 10h even when booted
via EFI.

This interacts extremely badly with a CSM actually attempting to install
a real video BIOS.

The last thing done before invoking a legacy OpROM is to call INT 10h to
set a plain text mode. In the case where it's the video BIOS OpROM being
loaded, INT 10h will normally point to an iret stub in the CSM itself.

Unless, that is, QemuVideoDxe has changed INT 10h to point to a location
in the 0xC0000 segment that it didn't properly allocate, so the real
OpROM has been shadowed over the top of it, and the INT 10h vector now
points to some random place in the middle of the newly-shadowed OpROM.

Don't Do That Then. QemuVideoDxe doesn't do any acceleration and just
sets up a linear framebuffer, so we don't lose much by just
unconditionally using BiosVideoDxe instead when CSM is present.

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
---
 OvmfPkg/OvmfPkgIa32.dsc    | 2 ++
 OvmfPkg/OvmfPkgIa32.fdf    | 5 +++--
 OvmfPkg/OvmfPkgIa32X64.dsc | 2 ++
 OvmfPkg/OvmfPkgIa32X64.fdf | 5 +++--
 OvmfPkg/OvmfPkgX64.dsc     | 2 ++
 OvmfPkg/OvmfPkgX64.fdf     | 5 +++--
 6 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/OvmfPkg/OvmfPkgIa32.dsc b/OvmfPkg/OvmfPkgIa32.dsc
index 473eaba24613..87716123997a 100644
--- a/OvmfPkg/OvmfPkgIa32.dsc
+++ b/OvmfPkg/OvmfPkgIa32.dsc
@@ -738,7 +738,9 @@ [Components]
   MdeModulePkg/Universal/DisplayEngineDxe/DisplayEngineDxe.inf
   MdeModulePkg/Universal/MemoryTest/NullMemoryTestDxe/NullMemoryTestDxe.inf
 
+!ifndef $(CSM_ENABLE)
   OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf
+!endif
   OvmfPkg/QemuRamfbDxe/QemuRamfbDxe.inf
   OvmfPkg/VirtioGpuDxe/VirtioGpu.inf
 
diff --git a/OvmfPkg/OvmfPkgIa32.fdf b/OvmfPkg/OvmfPkgIa32.fdf
index 14100356b4f6..785affeb90c8 100644
--- a/OvmfPkg/OvmfPkgIa32.fdf
+++ b/OvmfPkg/OvmfPkgIa32.fdf
@@ -305,9 +305,10 @@ [FV.DXEFV]
 INF  OvmfPkg/Csm/BiosThunk/VideoDxe/VideoDxe.inf
 INF  OvmfPkg/Csm/LegacyBiosDxe/LegacyBiosDxe.inf
 INF  RuleOverride=CSM OvmfPkg/Csm/Csm16/Csm16.inf
-!endif
-
+!else
 INF  OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf
+!endif
+
 INF  OvmfPkg/QemuRamfbDxe/QemuRamfbDxe.inf
 INF  OvmfPkg/VirtioGpuDxe/VirtioGpu.inf
 INF  OvmfPkg/PlatformDxe/Platform.inf
diff --git a/OvmfPkg/OvmfPkgIa32X64.dsc b/OvmfPkg/OvmfPkgIa32X64.dsc
index 73f33b721832..639e33cb285f 100644
--- a/OvmfPkg/OvmfPkgIa32X64.dsc
+++ b/OvmfPkg/OvmfPkgIa32X64.dsc
@@ -747,7 +747,9 @@ [Components.X64]
   MdeModulePkg/Universal/DisplayEngineDxe/DisplayEngineDxe.inf
   MdeModulePkg/Universal/MemoryTest/NullMemoryTestDxe/NullMemoryTestDxe.inf
 
+!ifndef $(CSM_ENABLE)
   OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf
+!endif
   OvmfPkg/QemuRamfbDxe/QemuRamfbDxe.inf
   OvmfPkg/VirtioGpuDxe/VirtioGpu.inf
 
diff --git a/OvmfPkg/OvmfPkgIa32X64.fdf b/OvmfPkg/OvmfPkgIa32X64.fdf
index 5af3cdc93d20..74407072563b 100644
--- a/OvmfPkg/OvmfPkgIa32X64.fdf
+++ b/OvmfPkg/OvmfPkgIa32X64.fdf
@@ -311,9 +311,10 @@ [FV.DXEFV]
 INF  OvmfPkg/Csm/BiosThunk/VideoDxe/VideoDxe.inf
 INF  OvmfPkg/Csm/LegacyBiosDxe/LegacyBiosDxe.inf
 INF  RuleOverride=CSM OvmfPkg/Csm/Csm16/Csm16.inf
-!endif
-
+!else
 INF  OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf
+!endif
+
 INF  OvmfPkg/QemuRamfbDxe/QemuRamfbDxe.inf
 INF  OvmfPkg/VirtioGpuDxe/VirtioGpu.inf
 INF  OvmfPkg/PlatformDxe/Platform.inf
diff --git a/OvmfPkg/OvmfPkgX64.dsc b/OvmfPkg/OvmfPkgX64.dsc
index 39ac79156529..69a3497c2c9e 100644
--- a/OvmfPkg/OvmfPkgX64.dsc
+++ b/OvmfPkg/OvmfPkgX64.dsc
@@ -745,7 +745,9 @@ [Components]
   MdeModulePkg/Universal/DisplayEngineDxe/DisplayEngineDxe.inf
   MdeModulePkg/Universal/MemoryTest/NullMemoryTestDxe/NullMemoryTestDxe.inf
 
+!ifndef $(CSM_ENABLE)
   OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf
+!endif
   OvmfPkg/QemuRamfbDxe/QemuRamfbDxe.inf
   OvmfPkg/VirtioGpuDxe/VirtioGpu.inf
 
diff --git a/OvmfPkg/OvmfPkgX64.fdf b/OvmfPkg/OvmfPkgX64.fdf
index 5af3cdc93d20..74407072563b 100644
--- a/OvmfPkg/OvmfPkgX64.fdf
+++ b/OvmfPkg/OvmfPkgX64.fdf
@@ -311,9 +311,10 @@ [FV.DXEFV]
 INF  OvmfPkg/Csm/BiosThunk/VideoDxe/VideoDxe.inf
 INF  OvmfPkg/Csm/LegacyBiosDxe/LegacyBiosDxe.inf
 INF  RuleOverride=CSM OvmfPkg/Csm/Csm16/Csm16.inf
-!endif
-
+!else
 INF  OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf
+!endif
+
 INF  OvmfPkg/QemuRamfbDxe/QemuRamfbDxe.inf
 INF  OvmfPkg/VirtioGpuDxe/VirtioGpu.inf
 INF  OvmfPkg/PlatformDxe/Platform.inf
-- 
2.21.0



* [PATCH v3 4/4] OvmfPkg: don't assign PCI BARs above 4GiB when CSM enabled
  2019-06-26 11:37 [PATCH v3 0/4] OvmfPkg: CSM boot fixes David Woodhouse
                   ` (2 preceding siblings ...)
  2019-06-26 11:37 ` [PATCH v3 3/4] OvmfPkg: Don't build in QemuVideoDxe when we have CSM David Woodhouse
@ 2019-06-26 11:37 ` David Woodhouse
  2019-06-26 13:19 ` [PATCH v3 0/4] OvmfPkg: CSM boot fixes Laszlo Ersek
  4 siblings, 0 replies; 11+ messages in thread
From: David Woodhouse @ 2019-06-26 11:37 UTC (permalink / raw)
  To: devel; +Cc: Laszlo Ersek

Mostly, this is only necessary for devices that the CSM might have
native support for, such as VirtIO and NVMe; PciBusDxe will already
degrade devices to 32-bit if they have an OpROM.

However, there doesn't seem to be a generic way of requesting PciBusDxe
to downgrade specific devices.

There's IncompatiblePciDeviceSupportProtocol but that doesn't provide
the PCI class information or a handle to the device itself, so there's
no simple way to just match on all NVMe devices, for example.

Just leave gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size set to zero for
CSM builds, until/unless that can be fixed.

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
---
 OvmfPkg/OvmfPkgIa32X64.dsc | 4 ++++
 OvmfPkg/OvmfPkgX64.dsc     | 4 ++++
 2 files changed, 8 insertions(+)

diff --git a/OvmfPkg/OvmfPkgIa32X64.dsc b/OvmfPkg/OvmfPkgIa32X64.dsc
index 639e33cb285f..ad20531ceb8b 100644
--- a/OvmfPkg/OvmfPkgIa32X64.dsc
+++ b/OvmfPkg/OvmfPkgIa32X64.dsc
@@ -543,7 +543,11 @@ [PcdsDynamicDefault]
   gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Base|0x0
   gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Size|0x0
   gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Base|0x0
+!ifdef $(CSM_ENABLE)
+  gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size|0x0
+!else
   gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size|0x800000000
+!endif
 
   gEfiMdePkgTokenSpaceGuid.PcdPlatformBootTimeOut|0
 
diff --git a/OvmfPkg/OvmfPkgX64.dsc b/OvmfPkg/OvmfPkgX64.dsc
index 69a3497c2c9e..0542ac2235b4 100644
--- a/OvmfPkg/OvmfPkgX64.dsc
+++ b/OvmfPkg/OvmfPkgX64.dsc
@@ -542,7 +542,11 @@ [PcdsDynamicDefault]
   gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Base|0x0
   gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Size|0x0
   gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Base|0x0
+!ifdef $(CSM_ENABLE)
+  gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size|0x0
+!else
   gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size|0x800000000
+!endif
 
   gEfiMdePkgTokenSpaceGuid.PcdPlatformBootTimeOut|0
 
-- 
2.21.0



* Re: [PATCH v3 0/4] OvmfPkg: CSM boot fixes
  2019-06-26 11:37 [PATCH v3 0/4] OvmfPkg: CSM boot fixes David Woodhouse
                   ` (3 preceding siblings ...)
  2019-06-26 11:37 ` [PATCH v3 4/4] OvmfPkg: don't assign PCI BARs above 4GiB when CSM enabled David Woodhouse
@ 2019-06-26 13:19 ` Laszlo Ersek
  4 siblings, 0 replies; 11+ messages in thread
From: Laszlo Ersek @ 2019-06-26 13:19 UTC (permalink / raw)
  To: David Woodhouse, devel

On 06/26/19 13:37, David Woodhouse wrote:
> For v3, leaving out the cosmetic parts that touch code outside OvmfPkg. 
> This series is now purely the correctness fixes within OvmfPkg which are 
> required to make CSM boots work properly again.
> 
> The first two patches allow NVMe and VirtIO disks to be used as Legacy
> boot targets, since nobody really uses IDE any more.
> 
> The third avoids using QemuVideoDxe when we have CSM, as the INT 10h 
> shim installed by QemuVideoDxe conflicts with a real legacy video BIOS
> being installed.
> 
> Finally, avoid placing PCI BARs above 4GiB. Strictly speaking we only 
> need this for PCI devices which might be natively supported by the CSM 
> BIOS, like NVMe. Devices with an OpRom already get special-cased to stay 
> below 4GiB. But an IncompatiblePciDeviceSupportProtocol implementation 
> doesn't get to see the PCI device class; only the vendor/device IDs so 
> we can't use it for that purpose to downgrade more selectively. Instead, 
> just default to putting everything below 4GiB.
> 
> 
> David Woodhouse (4):
>   OvmfPkg/LegacyBios: set NumberBbsEntries to the size of BbsTable
>   OvmfPkg/LegacyBbs: Add boot entries for VirtIO and NVME devices
>   OvmfPkg: Don't build in QemuVideoDxe when we have CSM
>   OvmfPkg: don't assign PCI BARs above 4GiB when CSM enabled
> 
>  OvmfPkg/Csm/LegacyBiosDxe/LegacyBbs.c  | 157 ++++++++++++++++++++++++-
>  OvmfPkg/Csm/LegacyBiosDxe/LegacyBios.c |   7 +-
>  OvmfPkg/OvmfPkgIa32.dsc                |   2 +
>  OvmfPkg/OvmfPkgIa32.fdf                |   5 +-
>  OvmfPkg/OvmfPkgIa32X64.dsc             |   6 +
>  OvmfPkg/OvmfPkgIa32X64.fdf             |   5 +-
>  OvmfPkg/OvmfPkgX64.dsc                 |   6 +
>  OvmfPkg/OvmfPkgX64.fdf                 |   5 +-
>  8 files changed, 179 insertions(+), 14 deletions(-)
> 

Series pushed as commit range 2f3435c2343f..c7341877f695.

Thank you!
Laszlo


* Re: [PATCH v3 4/4] OvmfPkg: don't assign PCI BARs above 4GiB when CSM enabled
@ 2019-06-27 16:36 graf
  2019-06-27 18:43 ` Laszlo Ersek
  0 siblings, 1 reply; 11+ messages in thread
From: graf @ 2019-06-27 16:36 UTC (permalink / raw)
  To: lersek; +Cc: devel, David Woodhouse, Ard Biesheuvel

Hi David and Laszlo,

(with broken threading because gmane still mirrors the old ML ...)

> Mostly, this is only necessary for devices that the CSM might have
> native support for, such as VirtIO and NVMe; PciBusDxe will already
> degrade devices to 32-bit if they have an OpROM.
> 
> However, there doesn't seem to be a generic way of requesting PciBusDxe
> to downgrade specific devices.
> 
> There's IncompatiblePciDeviceSupportProtocol but that doesn't provide
> the PCI class information or a handle to the device itself, so there's
> no simple way to just match on all NVMe devices, for example.
> 
> Just leave gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size set to zero for
> CSM builds, until/unless that can be fixed.
> 
> Signed-off-by: David Woodhouse <dwmw2@...>
> Reviewed-by: Laszlo Ersek <lersek@...>
> ---
>  OvmfPkg/OvmfPkgIa32X64.dsc | 4 ++++
>  OvmfPkg/OvmfPkgX64.dsc     | 4 ++++
>  2 files changed, 8 insertions(+)
> 
> diff --git a/OvmfPkg/OvmfPkgIa32X64.dsc b/OvmfPkg/OvmfPkgIa32X64.dsc
> index 639e33cb285f..ad20531ceb8b 100644
> --- a/OvmfPkg/OvmfPkgIa32X64.dsc
> +++ b/OvmfPkg/OvmfPkgIa32X64.dsc
> @@ -543,7 +543,11 @@ [PcdsDynamicDefault]
>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Base|0x0
>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Size|0x0
>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Base|0x0
> +!ifdef $(CSM_ENABLE)
> +  gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size|0x0
> +!else
>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size|0x800000000
> +!endif
>  
>    gEfiMdePkgTokenSpaceGuid.PcdPlatformBootTimeOut|0
>  
> diff --git a/OvmfPkg/OvmfPkgX64.dsc b/OvmfPkg/OvmfPkgX64.dsc
> index 69a3497c2c9e..0542ac2235b4 100644
> --- a/OvmfPkg/OvmfPkgX64.dsc
> +++ b/OvmfPkg/OvmfPkgX64.dsc
> @@ -542,7 +542,11 @@ [PcdsDynamicDefault]
>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Base|0x0
>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Size|0x0
>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Base|0x0
> +!ifdef $(CSM_ENABLE)
> +  gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size|0x0

IIRC x86 Linux just takes firmware provided BAR maps as they are and 
doesn't map on its own. Or does it map if a BAR was previously unmapped?

In the former case, wouldn't that mean that we're breaking GPU 
passthrough (*big* BARs) for OVMF if the OVMF version happens to support 
CSM? So if a distro decides to turn on CSM, that would be a very subtle 
regression.

Would it be possible to change the PCI mapping logic to just simply 
*prefer* low BAR space if there's some available and the BAR is not big 
(<64MB for example)?

That way we could have CSM enabled OVMF for everyone ;)


Alex


> +!else
>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size|0x800000000
> +!endif
>  
>    gEfiMdePkgTokenSpaceGuid.PcdPlatformBootTimeOut|0
>  
> -- 
> 2.21.0


* Re: [PATCH v3 4/4] OvmfPkg: don't assign PCI BARs above 4GiB when CSM enabled
  2019-06-27 16:36 [PATCH v3 4/4] OvmfPkg: don't assign PCI BARs above 4GiB when CSM enabled graf
@ 2019-06-27 18:43 ` Laszlo Ersek
  2019-06-27 19:01   ` Laszlo Ersek
  0 siblings, 1 reply; 11+ messages in thread
From: Laszlo Ersek @ 2019-06-27 18:43 UTC (permalink / raw)
  To: Alexander Graf; +Cc: devel, David Woodhouse, Ard Biesheuvel

On 06/27/19 18:36, Alexander Graf wrote:
> Hi David and Laszlo,
> 
> (with broken threading because gmane still mirrors the old ML ...)
> 
>> Mostly, this is only necessary for devices that the CSM might have
>> native support for, such as VirtIO and NVMe; PciBusDxe will already
>> degrade devices to 32-bit if they have an OpROM.
>>
>> However, there doesn't seem to be a generic way of requesting PciBusDxe
>> to downgrade specific devices.
>>
>> There's IncompatiblePciDeviceSupportProtocol but that doesn't provide
>> the PCI class information or a handle to the device itself, so there's
>> no simple way to just match on all NVMe devices, for example.
>>
>> Just leave gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size set to zero for
>> CSM builds, until/unless that can be fixed.
>>
>> Signed-off-by: David Woodhouse <dwmw2@...>
>> Reviewed-by: Laszlo Ersek <lersek@...>
>> ---
>>  OvmfPkg/OvmfPkgIa32X64.dsc | 4 ++++
>>  OvmfPkg/OvmfPkgX64.dsc     | 4 ++++
>>  2 files changed, 8 insertions(+)
>>
>> diff --git a/OvmfPkg/OvmfPkgIa32X64.dsc b/OvmfPkg/OvmfPkgIa32X64.dsc
>> index 639e33cb285f..ad20531ceb8b 100644
>> --- a/OvmfPkg/OvmfPkgIa32X64.dsc
>> +++ b/OvmfPkg/OvmfPkgIa32X64.dsc
>> @@ -543,7 +543,11 @@ [PcdsDynamicDefault]
>>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Base|0x0
>>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Size|0x0
>>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Base|0x0
>> +!ifdef $(CSM_ENABLE)
>> +  gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size|0x0
>> +!else
>>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size|0x800000000
>> +!endif
>>  
>>    gEfiMdePkgTokenSpaceGuid.PcdPlatformBootTimeOut|0
>>  
>> diff --git a/OvmfPkg/OvmfPkgX64.dsc b/OvmfPkg/OvmfPkgX64.dsc
>> index 69a3497c2c9e..0542ac2235b4 100644
>> --- a/OvmfPkg/OvmfPkgX64.dsc
>> +++ b/OvmfPkg/OvmfPkgX64.dsc
>> @@ -542,7 +542,11 @@ [PcdsDynamicDefault]
>>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Base|0x0
>>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio32Size|0x0
>>    gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Base|0x0
>> +!ifdef $(CSM_ENABLE)
>> +  gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size|0x0
> 
> IIRC x86 Linux just takes firmware provided BAR maps as they are and
> doesn't map on its own.

That's correct.


> Or does it map if a BAR was previously unmapped?

My understanding is that Linux re-maps the BARs if it dislikes something
(e.g. root bridge apertures described in ACPI _CRS do not cover some
ranges programmed into actual BARs).

IIUC reallocation can be requested on the kernel cmdline as well, with
pci=realloc.

I believe you could test your question with the "pci-testdev" QEMU
device model -- in QEMU commit 417463341e3e ("pci-testdev: add optional
memory bar", 2018-11-05), Gerd added the "membar" property for just that
(IIRC).


> In the former case, wouldn't that mean that we're breaking GPU
> passthrough (*big* BARs) for OVMF if the OVMF version happens to support
> CSM? So if a distro decides to turn on CSM, that would be a very subtle
> regression.

Yes, this is in theory a possible regression. It requires the user to
combine huge BARs with an OVMF build that includes the CSM.

I've been aware of this, but it seems like such a corner case that I
didn't intend to raise it. To begin with, building OVMF with the CSM is
a niche use case in itself.

David described (but I've forgotten the details, by now) some kind of
setup or service where a user cannot choose between pure SeaBIOS and
pure OVMF, for their virtual machine. They are given just one firmware,
and so in order to let users boot both legacy and UEFI OSes, it makes
sense for the service provider to offer OVMF+CSM.

Fine -- but, in that kind of service, where users are prevented from
picking one of two "pure" firmwares, do we really expect users to have
the configuration freedom to shove GPUs with huge BARs into their VMs?


> Would it be possible to change the PCI mapping logic to just simply
> *prefer* low BAR space if there's some available and the BAR is not big
> (<64MB for example)?

PciBusDxe in MdeModulePkg is practically untouchable, at such a
"strategy" level. We can fix bugs in it, but only surgically. (This is
not something that I endorse, I'm just observing it.)

Platforms are expected to influence the behavior of PciBusDxe through
implementing the "incompatible pci device support" protocol. OVMF
already does that (IncompatiblePciDeviceSupportDxe), but the protocol
interface (from the PI spec) is not flexible enough for what David
actually wanted. Otherwise, this restriction would have been expressed
per-controller.

If the problem that you describe above outweighs the issue that David
intends to mitigate with the patch, in a given service, then the vendor
can rebuild OVMF with a suitable "--pcd=..." option. Or else, they can
even use

  -fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=32768

dynamically, on the QEMU command line. (Please see commit 7e5b1b670c38,
"OvmfPkg: PlatformPei: determine the 64-bit PCI host aperture for X64
DXE", 2016-03-23.)


> That way we could have CSM enabled OVMF for everyone ;)

Well, as long as we're discussing "everyone": we should forget about the
CSM altogether, in the long term. The CSM is a concession towards OSes
that are stuck in the past; a concession that is hugely complex and
difficult to debug & maintain. It is also incompatible with Secure Boot.
Over time, we should spend less and less time & energy on the CSM. Just
my opinion, of course. :)

Thanks
Laszlo


* Re: [PATCH v3 4/4] OvmfPkg: don't assign PCI BARs above 4GiB when CSM enabled
  2019-06-27 18:43 ` Laszlo Ersek
@ 2019-06-27 19:01   ` Laszlo Ersek
  2019-06-27 19:16     ` David Woodhouse
  0 siblings, 1 reply; 11+ messages in thread
From: Laszlo Ersek @ 2019-06-27 19:01 UTC (permalink / raw)
  To: Alexander Graf; +Cc: devel, David Woodhouse, Ard Biesheuvel

On 06/27/19 20:43, Laszlo Ersek wrote:
> On 06/27/19 18:36, Alexander Graf wrote:

>> That way we could have CSM enabled OVMF for everyone ;)
> 
> Well, as long as we're discussing "everyone": we should forget about the
> CSM altogether, in the long term. The CSM is a concession towards OSes
> that are stuck in the past; a concession that is hugely complex and
> difficult to debug & maintain. It is also incompatible with Secure Boot.
> Over time, we should spend less and less time & energy on the CSM. Just
> my opinion, of course. :)

To clarify -- this is by no means to say that *SeaBIOS* is a relic. I
absolutely don't imply that. Users should use the firmware they need,
especially in the virtual world, where choosing is really easy.

But the *CSM* is just an elaborate workaround for the user, to
circumvent a decision that a platform vendor made for him/her
unsolicitedly. In the virtual world in particular, I don't think such
workarounds should be necessary; the platform vendor should *please* the
user, and give them SeaBIOS directly, if they want that.

(Again, just my opinion -- I just wanted to clarify that I wasn't taking
a stab at SeaBIOS. That would be *foolish*.)

Thanks
Laszlo


* Re: [PATCH v3 4/4] OvmfPkg: don't assign PCI BARs above 4GiB when CSM enabled
  2019-06-27 19:01   ` Laszlo Ersek
@ 2019-06-27 19:16     ` David Woodhouse
  2019-06-28 13:12       ` Laszlo Ersek
  0 siblings, 1 reply; 11+ messages in thread
From: David Woodhouse @ 2019-06-27 19:16 UTC (permalink / raw)
  To: Laszlo Ersek, Alexander Graf; +Cc: devel, Ard Biesheuvel


On Thu, 2019-06-27 at 21:01 +0200, Laszlo Ersek wrote:
> 
> To clarify -- this is by no means to say that *SeaBIOS* is a relic. I
> absolutely don't imply that. Users should use the firmware they need,
> especially in the virtual world, where choosing is really easy.

Choosing is easy if you're running qemu on your desktop box.

If the owner of the guest doesn't also own the host, then you're into
"configurability", which means either guests having to *explicitly*
manage which firmware they get, which is a pain for the customer, or
some kind of half-arsed guessing about which firmware to use, based on
poking around in the image that's being booted. Which doesn't generally
go well either.

There's a reason Intel went for the one-size-fits-all on real hardware,
and it applies just as well to (some, but not all) virtual hosting too.


* Re: [PATCH v3 4/4] OvmfPkg: don't assign PCI BARs above 4GiB when CSM enabled
  2019-06-27 19:16     ` David Woodhouse
@ 2019-06-28 13:12       ` Laszlo Ersek
  0 siblings, 0 replies; 11+ messages in thread
From: Laszlo Ersek @ 2019-06-28 13:12 UTC (permalink / raw)
  To: David Woodhouse, Alexander Graf; +Cc: devel, Ard Biesheuvel

On 06/27/19 21:16, David Woodhouse wrote:
> On Thu, 2019-06-27 at 21:01 +0200, Laszlo Ersek wrote:
>>
>> To clarify -- this is by no means to say that *SeaBIOS* is a relic. I
>> absolutely don't imply that. Users should use the firmware they need,
>> especially in the virtual world, where choosing is really easy.
>
> Choosing is easy if you're running qemu on your desktop box.
>
> If the owner of the guest doesn't also own the host, then you're into
> "configurability", which means either guests having to *explicitly*
> manage which firmware they get, which is a pain for the customer,

Depends on the customer. They are capable of choosing the OS they want.
At least some of them are able to (and/or prefer to) choose the firmware
under their OS too.

In the opposite direction, the "libosinfo" project already tracks minute
details of guest OSes (for example, what version of the virtio spec the
OS's drivers support), so that users can get "optimized defaults", when
they create domains. Adding "preferred firmware" shouldn't be infeasible
IMO, if it were necessary.

> or some kind of half-arsed guessing about which firmware to use, based
> on poking around in the image that's being booted. Which doesn't
> generally go well either.
>
> There's a reason Intel went for the one-size-fits-all on real
> hardware, and it applies just as well to (some, but not all) virtual
> hosting too.

Quoting an earlier message from the list:

http://mid.mail-archive.com/4dfb6a1f-da8f-4141-8687-be968ff261a9@hpe.com

On 01/25/19 21:28, Brian J. Johnson wrote:
> In fall of 2017, Intel declared their intention to end legacy BIOS
> support on their platforms by 2020.
>
> http://www.uefi.org/sites/default/files/resources/Brian_Richardson_Intel_Final.pdf
>
>
> I believe they have stuck to this story at subsequent UEFI plugfests.

Anyway: CSM_ENABLE works again, thanks to you, so people can build it
usefully again. I'd just like to keep it default-off upstream.

Thanks
Laszlo


end of thread, other threads:[~2019-06-28 13:12 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-06-26 11:37 [PATCH v3 0/4] OvmfPkg: CSM boot fixes David Woodhouse
2019-06-26 11:37 ` [PATCH v3 1/4] OvmfPkg/LegacyBios: set NumberBbsEntries to the size of BbsTable David Woodhouse
2019-06-26 11:37 ` [PATCH v3 2/4] OvmfPkg/LegacyBbs: Add boot entries for VirtIO and NVME devices David Woodhouse
2019-06-26 11:37 ` [PATCH v3 3/4] OvmfPkg: Don't build in QemuVideoDxe when we have CSM David Woodhouse
2019-06-26 11:37 ` [PATCH v3 4/4] OvmfPkg: don't assign PCI BARs above 4GiB when CSM enabled David Woodhouse
2019-06-26 13:19 ` [PATCH v3 0/4] OvmfPkg: CSM boot fixes Laszlo Ersek
  -- strict thread matches above, loose matches on Subject: below --
2019-06-27 16:36 [PATCH v3 4/4] OvmfPkg: don't assign PCI BARs above 4GiB when CSM enabled graf
2019-06-27 18:43 ` Laszlo Ersek
2019-06-27 19:01   ` Laszlo Ersek
2019-06-27 19:16     ` David Woodhouse
2019-06-28 13:12       ` Laszlo Ersek
