From: "Laszlo Ersek" <lersek@redhat.com>
To: devel@edk2.groups.io, spbrogan@outlook.com, discuss@edk2.groups.io
Subject: Re: [edk2-devel] OVMF/QEMU shell based unit tests and writing to a virtual disk
Date: Tue, 27 Oct 2020 14:26:32 +0100 [thread overview]
Message-ID: <aa64424e-31b6-4857-474d-1a7acae5dacf@redhat.com> (raw)
In-Reply-To: <BN8PR07MB6962B04D8A71EE688D080763C81D0@BN8PR07MB6962.namprd07.prod.outlook.com>
On 10/22/20 20:55, Sean wrote:
> Laszlo and others familiar with QEMU,
>
> I am trying to write automation that boots QEMU/OVMF and then runs EFI
> applications; those EFI applications then use the UEFI shell APIs to
> write their state back to the disk. I am seeing very inconsistent
> results: sometimes it works fine, while other times the disk contents
> are invalid. If I run multiple tests, it seems like the first two work
> while the third starts failing, but overall it seems random.
>
> Failing means:
> Disk contents corrupted but present.
> Disk contents wrong size (short).
> Files that show as written in the UEFI shell never show up on the host.
>
> I am trying to determine if this is a known limitation of QEMU or a
> bug I need to track down in the unit test code.
>
> My setup:
>
> This is on a Windows 10 x64 host. I am running the current 5.1 version
> of QEMU.
>
> My script creates a folder in the Windows NTFS file system, copies the
> EFI executables and startup.nsh files to it, and then starts QEMU with
> the following additional parameter:
>
> -drive file=fat:rw:{VirtualDrive},format=raw,media=disk
This is the problem. Don't use the fat / vvfat block driver for anything
meaningful.
I don't even have to look at the particulars, as "fat" ("vvfat") is known
to be a hack. In particular, write operations should not be relied on (in
either the guest->host or the host->guest direction). Don't expect this
QEMU feature to work as a "semihosting" (esp. "live semihosting") solution.
What's important to understand about "vvfat" is that it attempts to
*re-compose* a filesystem view from block-level operations.
*De-composing* file operations into block operations is an everyday
occurrence (that's what filesystem drivers do everywhere). But the write
direction of vvfat attempts to do the *inverse* -- it seeks to recognize
block operations and to synthesize file operations from them. If you get
lucky, it sometimes even works.
Instead, please use a regular virtual disk image in the QEMU
configuration. This disk image should not be accessed concurrently by
QEMU (= the guest) and other host-side utilities. In other words, first
format / populate the disk image on the host side, then launch QEMU.
Then in the guest UEFI shell, terminate the guest with the "reset -s"
command. Finally, once QEMU has exited, use host-side utilities to fetch
the results from the virtual disk image.
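A minimal sketch of that sequence (the image name, size, and exact QEMU
invocation are just placeholders; formatting / populating the image is
shown further below):

  # on the host, before launching QEMU: create an empty raw disk image
  qemu-img create -f raw test_disk.img 64M

  # attach the image to QEMU as a plain raw disk,
  # instead of the vvfat "-drive file=fat:rw:..." option
  qemu-system-x86_64 ... -drive file=test_disk.img,format=raw,media=disk

  # in the guest UEFI shell, once the tests have finished:
  reset -s
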
On the host side (on a Linux installation, anyway), I tend to use the
"mtools" package ("mcopy" and friends) for manipulating the contents of
disk image files. Or, more frequently, the "guestfish" program (which is
extremely capable):
https://libguestfs.org/guestfish.1.html
I don't know if equivalents exist on Windows.
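For illustration, formatting and populating such an image on Linux could
look like this, using "mkfs.fat" (dosfstools) and "mcopy" (mtools); the
file names below are just examples:

  # put a FAT filesystem on the raw image created earlier
  mkfs.fat test_disk.img

  # copy the test binaries and startup.nsh in, *before* launching QEMU
  mcopy -i test_disk.img startup.nsh UnitTestApp.efi ::/

  # after QEMU has exited, copy the results back out to the host
  mcopy -i test_disk.img ::/Results.xml .
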
Now, another option (on Linux anyway) is to loop-mount a "raw" virtual
disk image. This is not recommended, as it directly exposes the host
kernel's filesystem driver(s) to metadata produced by the guest. It
could trigger security issues in the host kernel.
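For completeness, and assuming the same example image, loop-mounting would
look like the following (again, not recommended):

  # needs root; a read-only mount limits, but does not remove, the exposure
  mount -o loop,ro test_disk.img /mnt
  cp /mnt/Results.xml .
  umount /mnt
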
(This is exactly what guestfish avoids, by running a separate Linux
guest -- called the "libguestfs appliance" -- on top of the virtual disk
image. The guestfish command interpreter on the host side exchanges
commands and data with the appliance over virtio-serial. If the metadata
on the disk image is malicious, it will break / exploit the *guest*
kernel in the appliance. The host-side component, the guestfish command
interpreter, only has to sanity-check the virtio-serial exchanges.)
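For instance, fetching a result file with guestfish could look like this
(assuming the FAT filesystem sits directly on the whole disk, with no
partition table, and "Results.xml" is again just an example name):

  guestfish --ro -a test_disk.img -m /dev/sda copy-out /Results.xml .
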
... Earlier, I looked into "virtio-fs" support for OVMF:
https://virtio-fs.gitlab.io/
https://www.redhat.com/archives/virtio-fs/2019-August/msg00349.html
however, it's very complex, and the wire format (the opcodes) is
extremely low-level and Linux-specific -- to the point where the opcodes
directly mirror Linux VFS system calls, and (due to the lack of
documentation) I don't even understand what a large number of the opcodes
*do*.
Earlier I had given some thought to a mapping between
EFI_SIMPLE_FILE_SYSTEM_PROTOCOL / EFI_FILE_PROTOCOL and the virtio-fs
opcodes, but when I did that, it looked like a bad fit. Virtio-fs seems
to aim at serializing *Linux guest* filesystem operations as tightly as
possible for host-side processing, and that seemed like a big obstacle
for a *UEFI guest* mapping.
In particular, EFI_SIMPLE_FILE_SYSTEM_PROTOCOL / EFI_FILE_PROTOCOL don't
really expect data and/or metadata to change "under their feet" (-> due
to asynchronous host-side modifications). For a while I was hopeful to
expose such changes via the EFI_MEDIA_CHANGED return value. But -- alas,
I forget the details -- it seemed to turn out that the virtio-fs
interfaces wouldn't really let the EFI_SIMPLE_FILE_SYSTEM_PROTOCOL
driver, to be provided by OVMF, even *detect* the situations in which it
should return EFI_MEDIA_CHANGED.
So, the virtio-fs driver for OVMF has been postponed indefinitely, and
the best I can recommend at the moment is to use a regular virtual disk
image file. Remember that QEMU (= guest), and the other (host-side)
utilities for manipulating the disk image, should be strictly serialized
(they should mutually exclude each other).
Thanks
Laszlo
>
> VirtualDrive is the Windows file path of the folder mentioned above.
>
>
> If interested, you should be able to reproduce the results by pulling my
> branch, and/or you can review the above.
>
> You can see the operations here:
>
> PR: https://github.com/microsoft/mu_tiano_platforms/pull/1
>
> My Branch:
> https://github.com/spbrogan/mu_tiano_platforms/tree/personal/sebrogan/shellunittests
>
>
> Or, if interested, you can reproduce it by following the steps defined here:
>
> https://github.com/spbrogan/mu_tiano_platforms/blob/personal/sebrogan/shellunittests/Platforms/QemuQ35Pkg/Docs/building.md
>
>
> and more details here
> https://github.com/spbrogan/mu_tiano_platforms/blob/personal/sebrogan/shellunittests/Platforms/QemuQ35Pkg/Plugins/QemuRunner/ReadMe.md
>
>
> After building QEMU with the right parameters for your environment, you
> can run <your stuart_build cmd> --flashonly MAKE_STARTUP_NSH=TRUE
> RUN_UNIT_TESTS=TRUE
>
> For example, in my environment it looks like:
> stuart_build -c Platforms\QemuQ35Pkg\PlatformBuild.py
> TOOL_CHAIN_TAG=VS2019 --flashonly RUN_UNIT_TESTS=TRUE MAKE_STARTUP_NSH=TRUE
>
>
> Anyway, if I recall correctly, when we talked briefly about automation
> last year there was some concern that this would happen. Any
> information and/or ideas would be greatly appreciated.
>
> Thanks
> Sean
>