public inbox for devel@edk2.groups.io
From: Laszlo Ersek <lersek@redhat.com>
To: Brijesh Singh <brijesh.singh@amd.com>,
	edk2-devel-01 <edk2-devel@lists.01.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>,
	Jordan Justen <jordan.l.justen@intel.com>
Subject: Re: [PATCH 00/20] OvmfPkg: SEV: decrypt the initial SMRAM save state map for SMBASE relocation
Date: Mon, 5 Mar 2018 22:06:47 +0100	[thread overview]
Message-ID: <ed6a9560-b670-30b0-0908-e76656116780@redhat.com> (raw)
In-Reply-To: <43e0c837-3f1d-691b-f701-10c2777b721a@amd.com>

On 03/05/18 15:44, Brijesh Singh wrote:
> On 03/05/2018 08:00 AM, Laszlo Ersek wrote:

>> QEMU exits with the following error for me:
>>
>> 2018-03-05T13:40:12.478835Z qemu-system-x86_64: sev_ram_block_added:
>> failed to register region (0x7f3df3e00000+0x200000000)
>> 2018-03-05T13:40:12.489183Z qemu-system-x86_64: sev_ram_block_added:
>> failed to register region (0x7f3ffaa00000+0x37c000)
>> 2018-03-05T13:40:12.497580Z qemu-system-x86_64: sev_ram_block_added:
>> failed to register region (0x7f3ffa800000+0x20000)
>> 2018-03-05T13:40:12.504485Z qemu-system-x86_64:
>> sev_launch_update_data: LAUNCH_UPDATE ret=-12 fw_error=0 ''
>> 2018-03-05T13:40:12.504493Z qemu-system-x86_64: failed to encrypt
>> pflash rom
>>
>> Here's my full QEMU command line (started by libvirt) -- this command
>> line does not restrict pflash access to guest code that runs in SMM,
>> and correspondingly, the OVMF build lacks SMM_REQUIRE:
>>
> 
> Are you launching the guest as a normal user or as root? If you are
> launching the guest as a normal user, then please make sure you have
> increased the 'max locked memory' limit. The region registration
> function will try to pin the memory; while doing so, we check the
> limit, and if the requested size is greater than the ulimit, we fail.
> 
> 
> # ulimit -a
> core file size          (blocks, -c) unlimited
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 966418
> max locked memory       (kbytes, -l) 10240000
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 966418
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
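
The limit Brijesh describes can be inspected and sized from a shell; a
minimal sketch (the 8 GiB guest size and the 1 GiB headroom are
illustrative figures, not taken from this thread):

```shell
# Show the current "max locked memory" soft limit for this shell, in KiB.
ulimit -l

# An SEV guest needs all of its RAM pinned, so the limit must cover the
# guest RAM plus some overhead (pflash, option ROMs). For an illustrative
# 8 GiB guest with 1 GiB of headroom, the KiB value would be:
guest_gib=8
echo $(( (guest_gib + 1) * 1024 * 1024 ))
```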

Good catch! Libvirtd starts the QEMU process with UID=qemu, but the
restriction doesn't come from there.

Instead, it seems that the systemd default for "max locked memory" is
64 KB on RHEL-7. I raised it by setting

  DefaultLimitMEMLOCK=infinity

in "/etc/systemd/system.conf".

(The documentation is at:
- <https://www.freedesktop.org/software/systemd/man/systemd.exec.html>
- <https://www.freedesktop.org/software/systemd/man/systemd-system.conf.html>)

Following your other email, I've now also added
"iommu_platform=on,ats=on" to virtio-net-pci, not just virtio-scsi-pci.

This got a lot farther: the TianoCore splash screen was displayed, but
then it got stuck.

Looking more at the libvirt-generated command line, I figured maybe
"vhost" should be disabled for virtio-net (so that the device
implementation would run from QEMU userspace, not in the host kernel).
Thus, ultimately I added

    <interface type='network'>
      <driver name='qemu' iommu='on' ats='on'/>
              ^^^^^^^^^^^
    </interface>

documented at
<https://libvirt.org/formatdomain.html#elementsDriverBackendOptions>.
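
For reference, combined with the same tweak on the SCSI controller, the
relevant domain XML fragments look roughly like this (the source network
and the model line are illustrative placeholders, not copied from my
domain):

```xml
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <!-- name='qemu' disables vhost; iommu/ats map to the
       iommu_platform/ats virtio device properties -->
  <driver name='qemu' iommu='on' ats='on'/>
</interface>
<controller type='scsi' model='virtio-scsi'>
  <driver iommu='on' ats='on'/>
</controller>
```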

With these settings, the guest boots & works fine for me! I tested the
SEV guest with 4 VCPUs, both with and without SMM. (I used the same
kernel in the guest as on the host -- you wrote that the guest needs
CONFIG_AMD_MEM_ENCRYPT, and the host requirements already imply that.)

I'm attaching the full domain XML for reference.

Thanks!
Laszlo


Thread overview: 33+ messages
2018-03-02  0:03 [PATCH 00/20] OvmfPkg: SEV: decrypt the initial SMRAM save state map for SMBASE relocation Laszlo Ersek
2018-03-02  0:03 ` [PATCH 01/20] OvmfPkg/MemEncryptSevLib: rewrap to 79 characters width Laszlo Ersek
2018-03-02  0:33   ` Kinney, Michael D
2018-03-02 11:25     ` Laszlo Ersek
2018-03-02  0:03 ` [PATCH 02/20] OvmfPkg/MemEncryptSevLib: clean up MemEncryptSevIsEnabled() decl Laszlo Ersek
2018-03-02  0:03 ` [PATCH 03/20] OvmfPkg/MemEncryptSevLib: clean up MemEncryptSevClearPageEncMask() decl Laszlo Ersek
2018-03-02  0:03 ` [PATCH 04/20] OvmfPkg/MemEncryptSevLib: clean up MemEncryptSevSetPageEncMask() decl Laszlo Ersek
2018-03-02  0:03 ` [PATCH 05/20] OvmfPkg/MemEncryptSevLib: clean up SetMemoryEncDec() comment block Laszlo Ersek
2018-03-02  0:03 ` [PATCH 06/20] OvmfPkg/MemEncryptSevLib: clean up InternalMemEncryptSevSetMemoryDecrypted() decl Laszlo Ersek
2018-03-02  0:03 ` [PATCH 07/20] OvmfPkg/MemEncryptSevLib: clean up InternalMemEncryptSevSetMemoryEncrypted() decl Laszlo Ersek
2018-03-02  0:03 ` [PATCH 08/20] OvmfPkg/MemEncryptSevLib: sort #includes, and entries in INF file sections Laszlo Ersek
2018-03-02  0:03 ` [PATCH 09/20] OvmfPkg/PlatformPei: sort #includes in "AmdSev.c" Laszlo Ersek
2018-03-02  0:03 ` [PATCH 10/20] OvmfPkg/SmmCpuFeaturesLib: rewrap to 79 columns Laszlo Ersek
2018-03-02  0:03 ` [PATCH 11/20] OvmfPkg/SmmCpuFeaturesLib: upper-case the "static" keyword Laszlo Ersek
2018-03-02  0:04 ` [PATCH 12/20] OvmfPkg/SmmCpuFeaturesLib: sort #includes, and entries in INF file sections Laszlo Ersek
2018-03-02  0:04 ` [PATCH 13/20] OvmfPkg/SmmCpuFeaturesLib: remove unneeded #includes and LibraryClasses Laszlo Ersek
2018-03-02  0:04 ` [PATCH 14/20] OvmfPkg/AmdSevDxe: rewrap to 79 characters width Laszlo Ersek
2018-03-02  0:04 ` [PATCH 15/20] OvmfPkg/AmdSevDxe: sort #includes, and entries in INF file sections Laszlo Ersek
2018-03-02  0:04 ` [PATCH 16/20] OvmfPkg/AmdSevDxe: refresh #includes and LibraryClasses Laszlo Ersek
2018-03-02  0:04 ` [PATCH 17/20] OvmfPkg/MemEncryptSevLib: find pages of initial SMRAM save state map Laszlo Ersek
2018-03-02  0:04 ` [PATCH 18/20] OvmfPkg/PlatformPei: SEV: allocate " Laszlo Ersek
2018-03-02  0:04 ` [PATCH 19/20] OvmfPkg/SmmCpuFeaturesLib: SEV: encrypt+free pages of init. " Laszlo Ersek
2018-03-02  0:04 ` [PATCH 20/20] OvmfPkg/AmdSevDxe: decrypt the pages of the initial SMRAM " Laszlo Ersek
2018-03-02  1:16 ` [PATCH 00/20] OvmfPkg: SEV: decrypt the initial SMRAM save state map for SMBASE relocation Brijesh Singh
2018-03-02 11:53   ` Laszlo Ersek
2018-03-02 13:17     ` Brijesh Singh
2018-03-05  9:55       ` Laszlo Ersek
2018-03-05 14:00       ` Laszlo Ersek
2018-03-05 14:44         ` Brijesh Singh
2018-03-05 14:47           ` Brijesh Singh
2018-03-05 21:06           ` Laszlo Ersek [this message]
2018-03-02 15:21 ` Brijesh Singh
2018-03-06 12:59   ` Laszlo Ersek
