Subject: Re: [edk2-devel] [RFC PATCH 00/14] Firmware Support for Fast Live Migration for AMD SEV
From: "Laszlo Ersek" <lersek@redhat.com>
To: devel@edk2.groups.io, pbonzini@redhat.com, Tobin Feldman-Fitzthum, Dov Murik
References: <20210302204839.82042-1-tobin@linux.ibm.com>
 <9205.1614849670474335263@groups.io>
 <7775b8d9-ed8d-eb4f-0c1a-3552996cef90@redhat.com>
Date: Thu, 4 Mar 2021 22:18:14 +0100
In-Reply-To: <7775b8d9-ed8d-eb4f-0c1a-3552996cef90@redhat.com>

On 03/04/21 21:45, Laszlo Ersek wrote:
> On 03/04/21 10:21, Paolo Bonzini wrote:
>> Hi Tobin,
>>
>> as mentioned in the reply to the QEMU patches posted by Tobin, I
>> think the firmware helper approach is very good, but there are some
>> disadvantages in the idea of auxiliary vCPUs. These are especially
>> true in the VMM, where it's much nicer to have a separate VM that
>> goes through a specialized run loop; however, even at the firmware
>> level there are some complications (as you pointed out) in letting
>> MpService workers run after ExitBootServices.
>>
>> My idea would be that the firmware would start the VM as usual using
>> the same launch data; then, the firmware would detect it was running
>> as a migration helper VM during the SEC or PEI phases (for example
>> via the GHCB or some other unencrypted communication area), and
>> divert execution to the migration helper instead of proceeding to
>> the next boot phase. This would be somewhat similar in spirit to how
>> edk2 performs S3 resume, if my memory serves correctly.
>
> Very cool. You'd basically warm-reboot the virtual machine into a new
> boot mode (cf. BOOT_WITH_FULL_CONFIGURATION vs. BOOT_ON_S3_RESUME in
> OvmfPkg/PlatformPei).
>
> To me that's much more attractive than a "background job".
>
> The S3 parallel is great. What I'm missing is:
>
> - Is it possible to warm-reboot an SEV VM? (I vaguely recall that
>   it's not possible for SEV-ES at least.)
>   Because, that's how we'd transfer control to the early parts of
>   the firmware again, IIUC your idea, while preserving the memory
>   contents.
>
> - Who would initiate this process? S3 suspend is guest-initiated.
>   (Not that we couldn't use the guest agent, if needed.)
>
> (In case the idea is really about a separate VM, and not about
> rebooting the already running VM, then I don't understand -- how
> would a separate VM access the guest RAM that needs to be migrated?)

Sorry -- I've just caught up with the QEMU thread. Your message there:

  https://lists.gnu.org/archive/html/qemu-devel/2021-03/msg01220.html

says:

    Patches were posted recently to the KVM mailing list to create
    secondary VMs sharing the encryption context (ASID) with a primary
    VM

I did think of VMs sharing memory, but the goal of SEV seemed to be to
prevent exactly that, so I didn't think that was possible. I stand
corrected, and yes, this way I understand -- and welcome -- a
completely separate VM snooping the migration subject VM's memory.

My question would then have been whether the migration helper VM would
run on its own memory, and just read out the other VM's memory -- or
whether the MH VM would run somewhere inside the original VM's memory
(which sounds a lot riskier). But your message explains that too:

    The main advantage would be that the migration VM would not have
    to share the address space with the primary VM

This sounds ideal; it should allow for a completely independent
firmware platform -- we wouldn't even have to call it "OVMF", and it
might not even have to contain the DXE Core and later-phase
components. (Of course if it's more convenient to keep the stuff in
OVMF, that works too.)

(For some unsolicited personal information, now I feel less bad about
this idea never occurring to me -- I never knew about the KVM patch
set that would enable encryption context sharing.
(TBH I thought that was prevented, by design, in the SEV hardware...))

A workflow request to Tobin and Dov -- when posting closely interfacing
QEMU and edk2 series, it's best to cross-post both series to both
lists, and to CC everybody on everything. Feel free to use subject
prefixes like [qemu PATCH] and [edk2 PATCH] for clarity. It's been
difficult for me to follow both discussions (it doesn't help that I've
been CC'd on neither).

Thanks!
Laszlo

> NB in the X64 PEI phase of OVMF, only the first 4GB of RAM is mapped,
> so the migration handler would have to build its own page table under
> this approach too.
>
> Thanks!
> Laszlo
>
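The quoted NB (only the first 4GB of RAM mapped in OVMF's X64 PEI
phase, so the migration handler must build its own page tables) can be
illustrated with a rough sketch. This is hypothetical illustration
code, not OVMF code: it builds an identity map with 1GB pages in
ordinary heap memory so the software walk can be checked in user
space; in real PEI code the tables would be allocated from permanent
memory and loaded into CR3.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PTE_PRESENT  (1ULL << 0)
#define PTE_WRITE    (1ULL << 1)
#define PTE_PS       (1ULL << 7)              /* 1GB page in a PDPTE */
#define TBL_MASK     0x000FFFFFFFFFF000ULL   /* next-table base, bits 51:12 */
#define GB_MASK      0x000FFFFFC0000000ULL   /* 1GB page base, bits 51:30 */
#define GB           (1ULL << 30)

static uint64_t *alloc_table(void) {
  /* Page tables must be 4KB-aligned and zero-initialized. */
  uint64_t *t = aligned_alloc(4096, 4096);
  memset(t, 0, 4096);
  return t;
}

/* Build a PML4 that identity-maps [0, ram_top) with 1GB pages,
   covering RAM above 4GB that the PEI-phase tables leave unmapped. */
static uint64_t *build_identity_map(uint64_t ram_top) {
  uint64_t *pml4 = alloc_table();
  for (uint64_t addr = 0; addr < ram_top; addr += GB) {
    unsigned pml4_idx = (addr >> 39) & 0x1FF;
    unsigned pdpt_idx = (addr >> 30) & 0x1FF;
    if (!(pml4[pml4_idx] & PTE_PRESENT)) {
      pml4[pml4_idx] =
          (uint64_t)(uintptr_t)alloc_table() | PTE_PRESENT | PTE_WRITE;
    }
    uint64_t *pdpt = (uint64_t *)(uintptr_t)(pml4[pml4_idx] & TBL_MASK);
    pdpt[pdpt_idx] = addr | PTE_PRESENT | PTE_WRITE | PTE_PS;
  }
  return pml4;
}

/* Software page walk: resolve a virtual address through the tables,
   mimicking what the MMU would do with this PML4 in CR3. */
static uint64_t translate(uint64_t *pml4, uint64_t va) {
  uint64_t pml4e = pml4[(va >> 39) & 0x1FF];
  uint64_t *pdpt = (uint64_t *)(uintptr_t)(pml4e & TBL_MASK);
  uint64_t pdpte = pdpt[(va >> 30) & 0x1FF];
  return (pdpte & GB_MASK) | (va & (GB - 1));
}
```

Mapping 8GB this way touches one PML4 slot and eight PDPT entries; the
same loop, run against the host-reported RAM top, would let the helper
reach all of the subject VM's memory.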