From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: "Laszlo Ersek" <lersek@redhat.com>, Aurélien <amlabs@free.fr>
Cc: edk2-devel@lists.01.org
Subject: Re: bootloader only see first disk
Date: Mon, 17 Sep 2018 10:34:32 +0200 [thread overview]
Message-ID: <0fbf7745-3899-f2e6-fe9c-e8935cdc1aed@proxmox.com> (raw)
In-Reply-To: <3614532a-fd89-0e9d-2435-530e343024a6@redhat.com>
Hi Laszlo,
On 9/16/18 12:04 PM, Laszlo Ersek wrote:
> Hello Aurélien,
>
> adding Thomas to the CC list, and commenting at the bottom:
thanks for CC'ing me, and thanks to you, Aurélien, for writing this report.
>
> On 09/15/18 18:36, Aurélien wrote:
>> Hi,
>>
>> I don't know if it is the right place to report a bug, sorry if not.
>>
>> I'm a Proxmox user and the latest version (5.2) currently has a package
>> versioned 20180612-1 (
>> https://git.proxmox.com/?p=pve-edk2-firmware.git;a=summary ), which is
>> based on commit 5a56c0493955cf55e7eef96dbba815cfbb067d7d of edk2. A
>> build from the current master a few days ago has the same problem. The
>> previous Proxmox version (5.1 with this pve-qemu-kvm package) uses an
>> edk2 version dating from 2017 or before (it could be from before
>> September 2016 but I'm not sure), which has no problem. I also tested on
>> my workstation running Gentoo, which has a version based on the UDK2017
>> branch with source code from February 11, 2018, and it works too. So I
>> think the problem is located in the OVMF_CODE.fd code from 2018 (but
>> without any certainty).
>>
>> Actual behaviour (with all the various Linux distributions: Debian 9,
>> Ubuntu 18.04, CentOS 7.5 ...) with a VM which has EFI enabled:
>> - grub is loaded and only sees the first disk (where it is installed);
>> the "ls" command in the grub shell returns "(hd0) (hd0,gpt1)
>> (cd0)", but there are also hd1 and hd2, and in my case this is
>> problematic as /boot is on hd1.
>> - when pressing "escape" during boot to go to the setup screen and
>> just choosing "continue", it's OK (grub sees all the disks); this is
>> my current workaround to boot the installed OS.
>> - when installing an OS like Ubuntu, CentOS ... it's OK on the first
>> reboot (I think something in the installation does it, like setting a
>> new boot entry with grub-install or efibootmgr).
>>
>> Expected behaviour:
>> - grub sees all the disks (the ls command returns hd0, hd1, hd2 ...)
>> and not only the first one (where the EFI boot partition is located).
>> (sorry, I haven't tried any other boot loader)
>>
>> - Steps to reproduce:
>> start a VM with 3 disks and EFI enabled (OVMF_CODE), install a Linux
>> on the 2nd disk (/ & swap) with the EFI boot partition on the first
>> disk (/boot/efi on "sda1"), then cold start the VM.
>>
>> I can help by testing with a specific version (or another boot
>> loader), just tell me.
>>
>> Regards
>>
>> Aurélien
>>
>> ps: typical command line to start the VM:
>> /usr/bin/kvm \
>> -id 20 \
>> -name testvm.mydomain \
>> -chardev 'socket,id=qmp,path=/var/run/qemu-server/20.qmp,server,nowait' \
>> -mon 'chardev=qmp,mode=control' \
>> -pidfile /var/run/qemu-server/20.pid \
>> -daemonize \
>> -smbios 'type=1,uuid=88dc4e6b-34b8-46a7-a573-26596c34855d' \
>> -drive 'if=pflash,unit=0,format=raw,readonly,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
>> -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,file=/mnt/pve/pve-nfs-sas/images/20/vm-20-disk-4.raw' \
>> -smp '8,sockets=1,cores=8,maxcpus=8' \
>> -nodefaults \
>> -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
>> -vga qxl \
>> -vnc unix:/var/run/qemu-server/20.vnc,x509,password \
>> -cpu host,+kvm_pv_unhalt,+kvm_pv_eoi \
>> -m 512 \
>> -object 'memory-backend-ram,id=ram-node0,size=512M' \
>> -numa 'node,nodeid=0,cpus=0-7,memdev=ram-node0' \
>> -k fr \
>> -object 'iothread,id=iothread-virtioscsi1' \
>> -object 'iothread,id=iothread-virtioscsi2' \
>> -machine 'type=pc-i440fx-2.9' \
>> -device 'pci-bridge,id=pci.3,chassis_nr=3,bus=pci.0,addr=0x5' \
>> -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
>> -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
>> -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
>> -chardev 'socket,path=/var/run/qemu-server/20.qga,server,nowait,id=qga0' \
>> -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' \
>> -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' \
>> -spice 'tls-port=61004,addr=127.0.0.1,tls-ciphers=HIGH,seamless-migration=on' \
>> -device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' \
>> -chardev 'spicevmc,id=vdagent,name=vdagent' \
>> -device 'virtserialport,chardev=vdagent,name=com.redhat.spice.0' \
>> -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' \
>> -iscsi 'initiator-name=iqn.1993-08.org.debian:01:f9e4631950da' \
>> -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' \
>> -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=100' \
>> -device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1' \
>> -drive 'file=/mnt/pve/pve-nfs-sas/images/20/vm-20-disk-1.raw,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' \
>> -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=200' \
>> -device 'virtio-scsi-pci,id=virtioscsi1,bus=pci.3,addr=0x2,iothread=iothread-virtioscsi1' \
>> -drive 'file=/mnt/pve/pve-nfs-sas/images/20/vm-20-disk-2.raw,if=none,id=drive-scsi1,format=raw,cache=none,aio=native,detect-zeroes=on' \
>> -device 'scsi-hd,bus=virtioscsi1.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1' \
>> -device 'virtio-scsi-pci,id=virtioscsi2,bus=pci.3,addr=0x3,iothread=iothread-virtioscsi2' \
>> -drive 'file=/mnt/pve/pve-nfs-sas/images/20/vm-20-disk-3.raw,if=none,id=drive-scsi2,format=raw,cache=none,aio=native,detect-zeroes=on' \
>> -device 'scsi-hd,bus=virtioscsi2.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2' \
>> -netdev 'type=tap,id=net0,ifname=tap20i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \
>> -device 'virtio-net-pci,mac=4A:63:6D:11:11:11,netdev=net0,bus=pci.0,addr=0x12,id=net0'
>
> This is a very good issue report, thank you for it. The amount of detail
> you provided allows me to explain the status & behavior to you.
>
> (1) Originally, OVMF used to connect all drivers to all devices in the
> system. In a sense this was convenient, however it resulted in a lot of
> wasted work too -- if you had tens or hundreds of devices, the boot
> could take an extremely long time. And again, most of that work was
> wasted, since you almost never actually want to attempt booting off of
> tens or hundreds of devices. You only really want to connect the
> devices that you might reasonably attempt to boot from.
>
> (In the above, "connect" is used in the sense "bind" or "probe". Esp.
> with a Linux background, "probe" is the most fitting word, probably.)
>
>
> (2) Because the set of devices connected at boot is platform policy
> (i.e., UEFI platforms are at liberty to decide what devices they
> connect), we changed the above behavior to a minimalistic approach, in
> the following patch series:
>
> [edk2] [PATCH 0/6] OvmfPkg, ArmVirtQemu: leaner platform BDS policy for connecting devices
> http://mid.mail-archive.com/20180313212233.19215-1-lersek@redhat.com
>
> This was then pushed as commit range 12957e56d26d..ff1d0fbfbaec, in
> March 2018.
>
> With this set in place, if you provided *some* devices with the
> 'bootindex=N' property, then OVMF would no longer connect all devices in
> the system, it would connect only those that you marked with the
> bootindex property. (And then OVMF would auto-generate UEFI boot options
> for those only, as well.)
>
> In other words, the bootindex property was promoted from its previous
> "boot option filtering/reordering only" role to a "steering of device
> probing" role as well.
>
Ah yes, I should have figured that out. I even mentioned the faster boot
with multiple devices in the package changelog, but never wondered what
this implies for setups like Aurélien's, argh!
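
If I understand the new policy right, a stripped-down example of the
difference would look like this (image paths and device IDs are invented
purely for illustration):

  # diskA carries a bootindex, so OVMF connects it and auto-generates a
  # UEFI boot option for it; diskB has none, so it stays unconnected
  # unless something else (e.g. the setup utility) connects it.
  qemu-system-x86_64 \
    -drive 'if=pflash,unit=0,format=raw,readonly,file=OVMF_CODE.fd' \
    -drive 'if=pflash,unit=1,format=raw,file=OVMF_VARS.fd' \
    -device 'virtio-scsi-pci,id=scsi0' \
    -drive 'file=diskA.raw,if=none,id=drive-a,format=raw' \
    -device 'scsi-hd,bus=scsi0.0,drive=drive-a,id=diskA,bootindex=100' \
    -drive 'file=diskB.raw,if=none,id=drive-b,format=raw' \
    -device 'scsi-hd,bus=scsi0.0,drive=drive-b,id=diskB'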
>
> (3) The EFI build of grub most likely doesn't find your disks -- after
> (2) -- because it doesn't connect devices itself; it relies on the
> platform firmware to connect disks.
>
>
> (4) If you interrupt the boot process and enter the setup utility, the
> latter will connect all devices, regardless of (2). This is your current
> workaround, IIUC.
>
>
> (5) There are many classes of devices, and some of those should be
> connected even if they are not "bootable" at all. The series in (2)
> caused a regression for them, and we had to fix them up gradually. See
> the following commits:
>
> - OvmfPkg/PlatformBootManagerLib: add USB keyboard to ConIn
> - OvmfPkg/PlatformBootManagerLib: connect consoles unconditionally
> - OvmfPkg/PlatformBootManagerLib: connect Virtio RNG devices again
>
> However, the disks that you mention are *not* like this; they should
> indeed *not* be connected unless you mark them for booting.
>
>
> (6) The solution is either to instruct grub (I don't know how) to
> connect the disks in question itself, *or* to add the "bootindex=N"
> properties (with some suitable N values) to the disk devices that you
> want OVMF to connect for you (even if they contain no EFI system
> partition and/or a UEFI boot loader application). Given your current
> command line, the latter option translates to the "scsi-hd" devices with
> identifiers "scsi1" and "scsi2". For example:
>
> -device scsi-hd,bus=virtioscsi1.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,bootindex=201 \
> -device scsi-hd,bus=virtioscsi2.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2,bootindex=202 \
So we probably want to add a per-disk 'bootable' flag in our frontend,
which would add a higher bootindex (i.e., lower priority) to all marked
disks. That way VMs with lots of disks would still boot fast, while
setups like yours, Aurélien, keep working.
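
Roughly like this (just a sketch; the 'bootable' option name, the config
syntax and the bootindex values are all invented for illustration, nothing
final):

  # hypothetical VM config excerpt: extra disks marked as bootable
  scsi1: pve-nfs-sas:20/vm-20-disk-2.raw,bootable=1
  scsi2: pve-nfs-sas:20/vm-20-disk-3.raw,bootable=1

Our command-line generation would then translate those into exactly the
two bootindex=201/202 "scsi-hd" properties from your example above, while
disks without the flag would keep getting no bootindex at all.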
>
> Hope this helps,
Yes, it helps a lot! Many thanks for your explanation, much appreciated!
cheers,
Thomas