From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Kalra, Ashish"
To: Tobin Feldman-Fitzthum
CC: "Dr. David Alan Gilbert", Laszlo Ersek, "devel@edk2.groups.io",
  "dovmurik@linux.vnet.ibm.com", "Dov.Murik1@il.ibm.com", "Singh, Brijesh",
  "tobin@ibm.com", "Kaplan, David", "Grimm, Jon", "Lendacky, Thomas",
  "jejb@linux.ibm.com", "frankeh@us.ibm.com"
Subject: Re: [edk2-devel] RFC: Fast Migration for SEV and SEV-ES - blueprint and proof of concept
Date: Mon, 9 Nov 2020 20:34:11 +0000
References: <933a5d2b-a495-37b9-fe8b-243f9bae24d5@redhat.com>
  <61acbc7b318b2c099a106151116f25ea@linux.vnet.ibm.com>
  <20201106163848.GM3576@work-vm>
  <6c4d7b90a59d3df6895d8c0e35f7a2cd@linux.vnet.ibm.com>
  <20201106221704.GA23995@ashkalra_ubuntu_server>
  <830107e597cd63d69283094d4e36a10e@linux.vnet.ibm.com>
In-Reply-To: <830107e597cd63d69283094d4e36a10e@linux.vnet.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Language: en-US

[AMD Public Use]

Hello Tobin,

-----Original Message-----
From: Tobin Feldman-Fitzthum
Sent: Monday, November 9, 2020 2:28 PM
To: Kalra, Ashish
Cc: Dr. David Alan Gilbert; Laszlo Ersek; devel@edk2.groups.io;
dovmurik@linux.vnet.ibm.com; Dov.Murik1@il.ibm.com; Singh, Brijesh;
tobin@ibm.com; Kaplan, David; Grimm, Jon; Lendacky, Thomas;
jejb@linux.ibm.com; frankeh@us.ibm.com
Subject: Re: [edk2-devel] RFC: Fast Migration for SEV and SEV-ES - blueprint and proof of concept

On 2020-11-06 17:17, Ashish Kalra wrote:
> Hello Tobin,
>
> On Fri, Nov 06, 2020 at 04:48:12PM -0500, Tobin Feldman-Fitzthum wrote:
>> On 2020-11-06 11:38, Dr. David Alan Gilbert wrote:
>> > * Tobin Feldman-Fitzthum (tobin@linux.ibm.com) wrote:
>> > > On 2020-11-03 09:59, Laszlo Ersek wrote:
>> > > > Hi Tobin,
>> > > >
>> > > > (keeping full context -- I'm adding Dave)
>> > > >
>> > > > On 10/28/20 20:31, Tobin Feldman-Fitzthum wrote:
>> > > > > Hello,
>> > > > >
>> > > > > Dov Murik, James Bottomley, Hubertus Franke, and I have been working
>> > > > > on a plan for fast live migration of SEV and SEV-ES (and SEV-SNP when
>> > > > > it's out, and even hopefully Intel TDX) VMs. We have developed an
>> > > > > approach that we believe is feasible and a demonstration that shows
>> > > > > our solution to the most difficult part of the problem. In short, we
>> > > > > have implemented a UEFI Application that can resume from a VM
>> > > > > snapshot. We think this is the crux of SEV-ES live migration. After
>> > > > > describing the context of our demo and how it works, we explain how
>> > > > > it can be extended to a full SEV-ES migration. Our goal is to show
>> > > > > that fast SEV and SEV-ES live migration can be implemented in OVMF
>> > > > > with minimal kernel changes. We provide a blueprint for doing so.
>> > > > >
>> > > > > Typically the hypervisor facilitates live migration. AMD SEV excludes
>> > > > > the hypervisor from the trust domain of the guest. When a hypervisor
>> > > > > (HV) examines the memory of an SEV guest, it will find only
>> > > > > ciphertext. If the HV moves the memory of an SEV guest, the
>> > > > > ciphertext will be invalidated. Furthermore, with SEV-ES the
>> > > > > hypervisor is largely unable to access guest CPU state. Thus, fast
>> > > > > migration of SEV VMs requires support from inside the trust domain,
>> > > > > i.e. the guest.
>> > > > >
>> > > > > One approach is to add support for SEV migration to the Linux kernel.
>> > > > > This would allow the guest to encrypt/decrypt its own memory with a
>> > > > > transport key. This approach has met some resistance. We propose a
>> > > > > similar approach implemented not in Linux, but in firmware,
>> > > > > specifically OVMF. Since OVMF runs inside the guest, it has access to
>> > > > > the guest memory and CPU state. OVMF should be able to perform the
>> > > > > manipulations required for live migration of SEV and SEV-ES guests.
>> > > > >
>> > > > > The biggest challenge of this approach involves migrating the CPU
>> > > > > state of an SEV-ES guest. In a normal (non-SEV) migration the HV sets
>> > > > > the CPU state of the target before the target begins executing. In
>> > > > > our approach, the HV starts the target and OVMF must resume to
>> > > > > whatever state the source was in. We believe this to be the crux (or
>> > > > > at least the most difficult part) of live migration for SEV, and we
>> > > > > hope that by demonstrating resume from EFI, we can show that our
>> > > > > approach is generally feasible.
>> > > > >
>> > > > > Our demo can be found at <https://github.com/secure-migration>.
>> > > > > The tooling repository is the best starting point. It contains
>> > > > > documentation about the project and the scripts needed to run the
>> > > > > demo. There are two more repos associated with the project. One is a
>> > > > > modified edk2 tree that contains our modified OVMF. The other is a
>> > > > > modified qemu, which has a couple of temporary changes needed for the
>> > > > > demo. Our demonstration is aimed only at resuming from a VM snapshot
>> > > > > in OVMF. We provide the source CPU state and source memory to the
>> > > > > destination using temporary plumbing that violates the SEV trust
>> > > > > model. We explain the setup in more depth in README.md. We are
>> > > > > showing only that OVMF can resume from a VM snapshot. At the end we
>> > > > > will describe our plan for transferring CPU state and memory from
>> > > > > source to guest. To be clear, the temporary tooling used for this
>> > > > > demo isn't built for encrypted VMs, but below we explain how this
>> > > > > demo applies to and can be extended to encrypted VMs.
>> > > > >
>> > > > > We implemented our resume code in a very similar fashion to the
>> > > > > recommended S3 resume code. When the HV sets the CPU state of a
>> > > > > guest, it can do so when the guest is not executing. Setting the
>> > > > > state from inside the guest is a delicate operation. There is no way
>> > > > > to atomically set all of the CPU state from inside the guest.
>> > > > > Instead, we must set most registers individually and account for
>> > > > > changes in control flow that doing so might cause. We do this with a
>> > > > > three-phase trampoline. OVMF calls phase 1, which runs on the OVMF
>> > > > > map. Phase 1 sets up phase 2 and jumps to it. Phase 2 switches to an
>> > > > > intermediate map that reconciles the OVMF map and the source map.
>> > > > > Phase 3 switches to the source map, restores the registers, and
>> > > > > returns into execution of the source. We will go backwards through
>> > > > > these phases in more depth.
>> > > > >
>> > > > > The last thing that resume to EFI does is return. Specifically, we
>> > > > > use IRETQ, which reads the values of RIP, CS, RFLAGS, RSP, and SS
>> > > > > from a temporary stack and restores them atomically, thus returning
>> > > > > to source execution. Prior to returning, we must manually restore
>> > > > > most other registers to the values they had on the source. One
>> > > > > particularly significant register is CR3. When we return to Linux,
>> > > > > CR3 must be set to the source CR3 or the first instruction executed
>> > > > > in Linux will cause a page fault. The code that we use to restore the
>> > > > > registers and return must be mapped in the source page table, or we
>> > > > > would get a page fault executing the instructions prior to returning
>> > > > > into Linux. The value of CR3 is so significant that it defines the
>> > > > > three phases of the trampoline. Phase 3 begins when CR3 is set to the
>> > > > > source CR3. After setting CR3, we set all the other registers and
>> > > > > return.
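
To make the phase 3 step a little more concrete, here is a rough, untested
sketch of what such a return could look like (ring-0 firmware context, x86-64,
GNU C inline assembly). The SOURCE_STATE layout, the function name and the
argument passing are invented for this note and are not taken from the POC
code; a real implementation restores many more registers and, as described
above, must itself be mapped at the same virtual address in the source map:

    /* Sketch only: "phase 3" -- switch to the source CR3, restore a few GPRs
       and return into the source with IRETQ. Assumes this code and *State are
       still mapped at their current virtual addresses once the source CR3 is
       loaded, which is what the intermediate mapping is meant to arrange. */
    #include <stdint.h>

    typedef struct {
      uint64_t Cr3;                            /* source CR3                  */
      uint64_t Rip, Cs, Rflags, Rsp, Ss;       /* the five-word IRETQ frame   */
      uint64_t Rax, Rbx, Rcx, Rdx, Rsi, Rdi;   /* a few of the GPRs           */
    } SOURCE_STATE;

    void __attribute__((noreturn)) Phase3Return (SOURCE_STATE *State)
    {
      asm volatile (
        "movq  0(%0), %%rax\n\t"
        "movq  %%rax, %%cr3\n\t"      /* phase 3 starts here: source CR3      */
        /* Build the IRETQ frame on the temporary stack: push SS, RSP, RFLAGS,
           CS, RIP so that IRETQ pops RIP first.                              */
        "pushq 40(%0)\n\t"            /* SS      */
        "pushq 32(%0)\n\t"            /* RSP     */
        "pushq 24(%0)\n\t"            /* RFLAGS  */
        "pushq 16(%0)\n\t"            /* CS      */
        "pushq  8(%0)\n\t"            /* RIP     */
        /* Restore the remaining registers; RDI last, since it holds %0.      */
        "movq  48(%0), %%rax\n\t"
        "movq  56(%0), %%rbx\n\t"
        "movq  64(%0), %%rcx\n\t"
        "movq  72(%0), %%rdx\n\t"
        "movq  80(%0), %%rsi\n\t"
        "movq  88(%0), %%rdi\n\t"
        "iretq\n\t"
        : : "D" (State) : "rax", "rbx", "rcx", "rdx", "rsi", "memory");
      __builtin_unreachable ();
    }
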
>> > > > > Phase 2 mainly exists to set up phase 3. OVMF uses a 1-1 mapping,
>> > > > > meaning that virtual addresses are the same as physical addresses.
>> > > > > The kernel page table uses an offset mapping, meaning that virtual
>> > > > > addresses differ from physical addresses by a constant (for the most
>> > > > > part). Crucially, this means that the virtual address of the page
>> > > > > that is executed by phase 3 differs between the OVMF map and the
>> > > > > source map. If we are executing code mapped in OVMF and we change
>> > > > > CR3 to point to the source map, although the page may be mapped in
>> > > > > the source map, the virtual address will be different, and we will
>> > > > > face undefined behavior. To fix this, we construct intermediate page
>> > > > > tables that map the pages for phase 2 and 3 to the virtual address
>> > > > > expected in OVMF and to the virtual address expected in the source
>> > > > > map. Thus, we can switch CR3 from OVMF's map to the intermediate map
>> > > > > and then from the intermediate map to the source map. Phase 2 is
>> > > > > much shorter than phase 3. Phase 2 is mainly responsible for
>> > > > > switching to the intermediate map, flushing the TLB, and jumping to
>> > > > > phase 3.
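
As an aside, a minimal, untested sketch of what such a phase 2 hop could look
like (GNU C inline assembly; the parameter passing and names are invented for
this note and are not from the POC code):

    /* Sketch only: "phase 2" -- switch CR3 to the intermediate map and jump
       to phase 3 at the virtual address the *source* page table uses for it.
       Reloading CR3 flushes the non-global TLB entries along the way.        */
    #include <stdint.h>

    void __attribute__((noreturn))
    Phase2Switch (
      uint64_t IntermediateCr3,   /* maps phase 2/3 at both the OVMF and the
                                     source virtual addresses                 */
      uint64_t Phase3SourceVa,    /* phase 3 as the source page table sees it */
      void     *SourceState       /* handed on to phase 3                     */
      )
    {
      asm volatile (
        "movq  %0, %%cr3\n\t"     /* enter the intermediate map               */
        "movq  %2, %%rdi\n\t"     /* argument for phase 3                     */
        "jmpq  *%1\n\t"           /* continue at the source-map address       */
        :
        : "r" (IntermediateCr3), "r" (Phase3SourceVa), "r" (SourceState)
        : "rdi", "memory");
      __builtin_unreachable ();
    }
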
>> > > > > Fortunately phase 1 is even simpler than phase 2. Phase 1 has two
>> > > > > duties. First, since phase 2 and 3 operate without a stack and can't
>> > > > > access values defined in OVMF (such as the addresses of the pages
>> > > > > containing phase 2 and 3), phase 1 must pass these values to phase 2
>> > > > > by putting them in registers. Second, phase 1 must start phase 2 by
>> > > > > jumping to it.
>> > > > >
>> > > > > Given that we can resume to a snapshot in OVMF, we should be able to
>> > > > > migrate an SEV guest as long as we can securely communicate the VM
>> > > > > snapshot from source to destination. For our demo, we do this with a
>> > > > > handful of QMP commands. More sophisticated methods are required for
>> > > > > a production implementation.
>> > > > >
>> > > > > When we refer to a snapshot, what we really mean is the device
>> > > > > state, memory, and CPU state of a guest. In live migration this is
>> > > > > transmitted dynamically as opposed to being saved and restored.
>> > > > > Device state is not protected by SEV and can be handled entirely by
>> > > > > the HV. Memory, on the other hand, cannot be handled only by the HV.
>> > > > > As mentioned previously, memory needs to be encrypted with a
>> > > > > transport key. A Migration Handler on the source will coordinate
>> > > > > with the HV to encrypt pages and transmit them to the destination.
>> > > > > The destination HV will receive the pages over the network and pass
>> > > > > them to the Migration Handler in the target VM so they can be
>> > > > > decrypted. This transmission will occur continuously until the
>> > > > > memory of the source and target converges.
>> > > > >
>> > > > > Plain SEV does not protect the CPU state of the guest and therefore
>> > > > > does not require any special mechanism for transmission of the CPU
>> > > > > state. We plan to implement an end-to-end migration with plain SEV
>> > > > > first. In SEV-ES, the PSP (platform security processor) encrypts CPU
>> > > > > state on each VMExit. The encrypted state is stored in memory.
>> > > > > Normally this memory (known as the VMSA) is not mapped into the
>> > > > > guest, but we can add an entry to the nested page tables that will
>> > > > > expose the VMSA to the guest. This means that when the guest
>> > > > > VMExits, the CPU state will be saved to guest memory. With the CPU
>> > > > > state in guest memory, it can be transmitted to the target using the
>> > > > > method described above.
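
For what it's worth, once the VMSA is exposed through the nested page tables
like this, the in-guest code can treat the saved vCPU state as ordinary guest
memory. A trivial, untested sketch (the GPA and names below are invented for
this note, not taken from the POC):

    /* Sketch only: copy the HV-exposed VMSA page so it can later be wrapped
       with the transport key like any other private guest page.              */
    #include <stdint.h>

    #define PAGE_SIZE  4096
    #define VMSA_GPA   0x7f8000000ULL   /* example address, not from the POC  */

    void MhCaptureVmsa (uint8_t Out[PAGE_SIZE])
    {
      /* Firmware runs with an identity map, so the guest-physical address can
         be dereferenced directly; to the HV the same page is ciphertext under
         the guest key, but from inside the guest it is plaintext.             */
      const volatile uint8_t *Vmsa = (const volatile uint8_t *)(uintptr_t)VMSA_GPA;

      for (uint32_t Index = 0; Index < PAGE_SIZE; Index++) {
        Out[Index] = Vmsa[Index];
      }
    }
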
>> > > > > In addition to the changes needed in OVMF to resume the VM, the
>> > > > > transmission of the VM from source to target will require a new
>> > > > > code path in the hypervisor. There will also need to be a few minor
>> > > > > changes to Linux (adding a mapping for our Phase 3 pages). Despite
>> > > > > all the moving pieces, we believe that this is a feasible approach
>> > > > > for supporting live migration for SEV and SEV-ES.
>> > > > >
>> > > > > For the sake of brevity, we have left out a few issues, including
>> > > > > SMP support, generation of the intermediate mappings, and more. We
>> > > > > have included some notes about these issues in the COMPLICATIONS.md
>> > > > > file. We also have an outline of an end-to-end implementation of
>> > > > > live migration for SEV-ES in END-TO-END.md. See README.md for info
>> > > > > on how to run the demo. While this is not a full migration, we hope
>> > > > > to show that fast live migration with SEV and SEV-ES is possible
>> > > > > without major kernel changes.
>> > > > >
>> > > > > -Tobin
>> > > >
>> > > > the one word that comes to my mind upon reading the above is,
>> > > > "overwhelming".
>> > > >
>> > > > (I have not been addressed directly, but:
>> > > >
>> > > > - the subject says "RFC",
>> > > >
>> > > > - and the documentation at
>> > > >   https://github.com/secure-migration/resume-from-edk2-tooling#what-changes-did-we-make
>> > > >   states that AmdSevPkg was created for convenience, and that the
>> > > >   feature could be integrated into OVMF. (Paraphrased.)
>> > > >
>> > > > So I guess it's tolerable if I make a comment :)
>> > > >
>> > > We've been looking forward to your perspective.
>> > >
>> > > > I've checked out the "mh-state-dev" branch of
>> > > > <https://github.com/secure-migration/resume-from-efi-edk2.git>. It
>> > > > has 80 commits on top of edk2 master (base commit: d5339c04d7cd,
>> > > > "UefiCpuPkg/MpInitLib: Add missing explicit PcdLib dependency",
>> > > > 2020-04-23).
>> > > >
>> > > > These commits were authored over the 6-7 months since April. It's
>> > > > obviously huge work. To me, most of these commits clearly aim at
>> > > > getting the demo / proof-of-concept functional, rather than guiding
>> > > > (more precisely: hand-holding) reviewers through the construction of
>> > > > the feature.
>> > > >
>> > > > In my opinion, the series is not upstreamable in its current format
>> > > > (which is presently not much more readable than a single-commit code
>> > > > drop). Upstreaming is probably not your intent, either, at this time.
>> > > >
>> > > > I agree that getting feedback ("buy-in") at this level of maturity is
>> > > > justified from your POV, before you invest more work into cleaning up
>> > > > / restructuring the series.
>> > > >
>> > > > My problem is that "hand-holding" is exactly what I'd need -- I
>> > > > cannot dedicate one or two weeks, as an indivisible block, to
>> > > > understanding your design. Nor can I approach the series patch-wise
>> > > > in its current format. Personally I would need the patch series to
>> > > > lead me through the whole design with baby steps ("ELI5"), meaning
>> > > > small code changes and detailed commit messages. I'd *also* need the
>> > > > more comprehensive guide-like documentation, as background material.
>> > > >
>> > > > Furthermore, I don't have an environment where I can test this
>> > > > proof-of-concept (and provide you with further incentive for cleaning
>> > > > up the series, by reporting success).
>> > > >
>> > > > So I hope others can spend the time discussing the design with you,
>> > > > and testing / repeating the demo. For me to review the patches, the
>> > > > patches should condense and replay your thinking process from the
>> > > > last 7 months, in as small as possible logical steps. (On the list.)
>> > > >
>> > > I completely understand your position. This PoC has a lot of new ideas
>> > > in it and you're right that our main priority was not to
>> > > hand-hold/guide reviewers through the code.
>> > >
>> > > One thing that is worth emphasizing is that the pieces we are
>> > > showcasing here are not the immediate priority when it comes to
>> > > upstreaming. Specifically, we looked into the trampoline to make sure
>> > > it was possible to migrate CPU state via firmware. While we need this
>> > > for SEV-ES and our goal is to support SEV-ES, it is not the first step.
>> > > We are currently working on a PoC for a full end-to-end migration with
>> > > SEV (non-ES), which may be a better place for us to begin a serious
>> > > discussion about getting things upstream. We will focus more on making
>> > > these patches accessible to the upstream community.
>> >
>> > With my migration maintainer hat on, I'd like to understand a bit more
>> > about these different approaches; they could be quite invasive, so I'd
>> > like to make sure we're not doing one and throwing it away - it would be
>> > great if you could explain your non-ES approach; you don't need to have
>> > POC code to explain it.
>> >
>> Our non-ES approach is a subset of our ES approach. For ES, the Migration
>> Handler in the guest needs to help out with memory and CPU state. For
>> plain SEV, the HV can set the CPU state, but we still need a way to
>> transfer the memory. The current POC only deals with the CPU state.
>>
>> We're still working out some of the details in QEMU, but the basic idea
>> of transferring memory is that each time the HV needs to send a page to
>> the target, it will ask the Migration Handler in the guest for a version
>> of the page that is encrypted with a transport key. Since the MH is
>> inside the guest, it can read from any address in guest memory. The
>> Migration Handlers on the source and the target will share a key. Once
>> the source encrypts the requested page with the transport key, it can
>> safely hand it off to the HV. Once the page reaches the target, the
>> target HV will pass the page into the Migration Handler, which will
>> decrypt it using the transport key and move the page to the appropriate
>> address.
>>
>> A few things to note:
>>
>> - The Migration Handler on the source needs to be running in the
>>   guest alongside the VM. On the target, the MH needs to start up
>>   before we can receive any pages. In both cases we are thinking
>>   that an additional vCPU can be started for the MH to run on.
>>   This could be spawned dynamically or live for the duration of
>>   the guest.
>>
>> - We need to make sure that the Migration Handler on the target
>>   does not overwrite itself when it receives pages from the
>>   source. Since we run the same firmware on the source and
>>   target, and since the MH is runtime code, the memory
>>   footprint of the MH should match on the source and the
>>   target. We will need to make sure there are no weird
>>   relocations.
>>
>> - There are some complexities arising from the fact that not
>>   every page in an SEV VM is encrypted. We are looking into
>>   the best way to handle encrypted vs. shared pages.
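
Tying the above together, here is a rough, untested sketch of the source-side
hand-off as I read it. The encryption-status bitmap, the TransportWrap()
placeholder (which stands in for a real authenticated cipher keyed with the
shared transport key) and all names are invented for this note and are not
taken from the POC:

    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE   4096
    #define MAX_GFN     (1ULL << 20)          /* 4 GiB guest, example only    */

    /* One bit per guest frame: 1 = private/encrypted, 0 = shared.            */
    static uint64_t EncBitmap[MAX_GFN / 64];

    static int GfnIsEncrypted (uint64_t Gfn)
    {
      return (int)((EncBitmap[Gfn / 64] >> (Gfn % 64)) & 1);
    }

    /* Placeholder for a real AEAD (e.g. AES-GCM) under the transport key.    */
    static void TransportWrap (const uint8_t *In, uint8_t *Out)
    {
      memcpy (Out, In, PAGE_SIZE);            /* NOT encryption; sketch only   */
    }

    /* Conceptually called when the HV asks the source Migration Handler for
       guest frame Gfn: shared pages can be sent by the HV as-is, private
       pages are re-encrypted under the transport key first.                  */
    int MhExportPage (uint64_t Gfn, const uint8_t *GuestPage, uint8_t *OutPage)
    {
      if (!GfnIsEncrypted (Gfn)) {
        return 0;                             /* HV can handle this one       */
      }
      TransportWrap (GuestPage, OutPage);
      return 1;                               /* now safe to hand to the HV   */
    }

The same path, run in reverse on the target, would unwrap the page and copy it
to the corresponding guest-physical address.
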
> Raising this question here as part of this discussion ... are you
> thinking of adding the page encryption bitmap (as we do for the slow
> migration patches) here to figure out if the guest pages are encrypted
> or not ?
>
We are using the bitmap for the first iteration of our end-to-end POC.

Ok.

> The page encryption status will need notifications from the guest
> kernel and OVMF.
>
> Additionally, is the page encryption bitmap support going to be added
> as a hypercall interface to the guest, which also means that the guest
> kernel needs to be modified ?
>
Although the bitmap is handy, we would like to avoid the patches you are
alluding to. We are currently looking into how we can eliminate the bitmap.

Please note, the page encryption bitmap is also required for SEV guest page
migration and SEV guest debug support, therefore it might be useful to have
these patches available.

If you want us to push Brijesh's and my patches for the page encryption
bitmap separately for the kernel, then let us know.

Thanks, Ashish

>> Hopefully those notes don't confound my earlier explanation too much.
>> I think that's most of the picture for non-ES migration. Let me know if
>> you have any questions. ES migration would use the same approach for
>> transferring memory.
>>
>> -Tobin
>>
>> > Dave
>> >
>> > > In the meantime, perhaps there is something we can do to help make
>> > > our current work more clear. We could potentially explain things on
>> > > a call or create some additional documentation. While our goal is
>> > > not to shove this version of the trampoline upstream, it is
>> > > significant to our plan as a whole and we want to help people
>> > > understand it.
>> > >
>> > > -Tobin
>> > >
>> > > > I really don't want to be the bottleneck here, which is why I
>> > > > would support introducing this feature as a separate top-level
>> > > > package (AmdSevPkg).
>> > > >
>> > > > Thanks
>> > > > Laszlo
>> >