Subject: Re: [edk2-devel] [PATCH v1 0/8] Measured SEV boot with kernel/initrd/cmdline
To: Laszlo Ersek , devel@edk2.groups.io, Ard Biesheuvel
Cc: Tobin Feldman-Fitzthum , Tobin Feldman-Fitzthum , Jim Cadden , James Bottomley , Hubertus Franke , Jordan Justen , Ashish Kalra , Brijesh Singh , Erdem Aktas , Jiewen Yao , Min Xu , Tom Lendacky
References: <20210525053116.1533673-1-dovmurik@linux.ibm.com> <5d8c598e-31de-7973-df51-e913bba54587@redhat.com>
From: "Dov Murik"
Message-ID: <8ee75de9-9a62-c62b-fd98-810b5dae63b3@linux.ibm.com>
Date: Sun, 6 Jun 2021 16:21:04 +0300
MIME-Version: 1.0
On 04/06/2021 14:26, Laszlo Ersek wrote:
> On 06/04/21 12:30, Dov Murik wrote:
>
>> So I argue to keep the existing approach with two separate areas: the
>> existing one for injected secrets, and a new one for a table of
>> approved hashes (filled by QEMU and included in the initial encrypted,
>> measured guest memory).
>
> OK.
>
>> If the issue is MEMFD space,
>
> Yes, that's it.
>
>> maybe we can do something like: use the existing secrets page (4KB)
>> for two uses: first 3KB for secrets, and last 1KB for hashes. If this
>> is not enough, the hashes are even smaller than 1KB; and we could even
>> publish only one hash - the hash of all 3 hashes (we'd need to think
>> about edge cases when there's no cmdline/initrd). But all these
>> "solutions" feel a bit hacky to me and might complicate the code.
>
> All these PCDs come in pairs -- base and size. (IIRC.) If there's no
> architectural requirement to keep these two kinds of info in different
> pages (such as different page protections or whatever), then packing
> them into a single page is something I'd like. The above 3K+1K
> subdivision sounds OK to me.

I'll go with 3KB secrets + 1KB hashes.

>> I don't understand your suggestion: "I'd *really* like us to extend
>> one of the existent structures. If necessary, introduce a new GUID,
>> for a table that contains both previously injected data, and the new
>> data."; does this mean to use a single MEMFD page for the injected
>> secrets and the hashes?
>
> Yes, it's the same (say, 3K+1K) idea, just expressed differently.
> In one case, you have two GUIDed structs in the (plaintext, not
> compressed) reset vector in the pflash, and the base+size structures
> associated with those two separate GUIDs happen to identify distinct
> ranges of the same MEMFD page. In the other case, you have just one
> GUIDed structure (with base+size contents), and the page pointed-to by
> this base+size pair is subdivided by *internal* structuring -- such as
> internal GUIDs and so on. Whichever is simpler to implement in both
> QEMU and edk2; I just want to avoid wasting a full page for three
> hashes.

I'll go with the two GUIDed structures in the reset vector (which will
point to distinct parts of a single 4KB page). That actually means
shortening the existing secrets MEMFD area from 4KB to 3KB. Is that OK?

>> Also, in general, I don't really understand the implications of
>> running out of MEMFD space;
>
> Here's one implication of enlarging MEMFD. It pushes BS Code, BS Data,
> Loader Code, Loader Data, perhaps some AcpiNVS and Reserved memory
> allocations to higher addresses. Then when the kernel is loaded, its
> load address may be higher too. I'm not worried about wasted guest
> memory, but about various silent assumptions as to where the kernel
> "should be". For example, after one round of enlarging DXEFV, the
> "crash" utility stopped opening guest memory dumps, because it couldn't
> find a kernel signature in the (low) address range that it used to
> scan. The fix wasn't too difficult (the range to scan could be
> specified on the "crash" command line, and then my colleague Dave
> Anderson just modified "crash"), but it was a *surprise*. I don't like
> those.
>
>> maybe you have other ideas around this (for example, can we make
>> MEMFD bigger only for the AmdSevX64 platform?)
>
> Yes, experimenting with a larger MEMFD in just the AmdSevX64 platform
> is fine.
But now I understand that failures can appear way later in userspace
(the "crash" utility), so just testing that a modern AMD VM boots fine
is not enough to get confidence here.

> NB reordering various PCDs between each other, so that their relative
> relationships (orders) change, is a *lot* more risky than just
> enlarging existing areas. The code in OVMF tends not to rely on actual
> bases and sizes, but it may very well rely on a particular BasePCD +
> SizePCD sum not exceeding another particular BasePCD.

Thanks for pointing this out. I'll avoid reordering.

>>> - Modifying the QemuFwCfgLib class for this purpose is inappropriate.
>>>   Even if we do our own home-brewed verifier, none of it must go into
>>>   QemuFwCfgLib class. QemuFwCfgLib is for transport.
>>
>> OK, we'll take the verifier out (as you suggested below - to a
>> BlobVerifierLib with two implementations).
>
>>> [Ard, please see this one question:]
>>>
>>> - A major complication for hashing all three of: kernel, initrd,
>>>   cmdline, is that the *fetching* of this triplet is split between
>>>   two places. (Well, it is split between *three* places in fact, but
>>>   I'm going to ignore LinuxInitrdDynamicShellCommand for now, because
>>>   the AmdSevX64 platform sets BUILD_SHELL to FALSE for production.)
>>>
>>> The kernel and the initrd are fetched in QemuKernelLoaderFsDxe, but
>>> the command line is fetched in (both) QemuLoadImageLib instances.
>>> This requires that all these modules be littered with hashing as
>>> well, which I find *really bad*. Even if we factor out the actual
>>> logic, I strongly dislike having *just hooks* for hashing in multiple
>>> modules.
>>>
>>> Now, please refer to efc52d67e157 ("OvmfPkg/QemuKernelLoaderFsDxe:
>>> don't expose kernel command line", 2020-03-05).
>>> If we first
>>>
>>> (a) reverted that commit, and
>>>
>>> (b) modified *both* QemuLoadImageLib instances, to load the kernel
>>>     command line from the *synthetic filesystem* (rather than
>>>     directly from fw_cfg),
>>>
>>> then we could centralize the hashing to just QemuKernelLoaderFsDxe.
>>>
>>> Ard -- what's your thought on this?
>>
>> I understand there's agreement here, and that both this suggested
>> change (use the synthetic filesystem) and my patch series (add hash
>> verification) touch the same code areas. How do you envision this
>> process on the mailing list? Separate patch series with a dependency?
>> One long patch series with both changes? What goes first?
>
> Good point. I do have a kind of patch order laid out in my mind, but I
> didn't think of whether we should have the patches in one patch
> series, or in two "waves".
>
> OK, let's go with two patch sets.
>
> In the first set, we should just focus on the above steps (a) and (b).
> Step (a) shouldn't be too hard. In step (b), you'd modify both
> QemuLoadImageLib instances (two separate patches), replacing the
> QemuFwCfgLib APIs for fetching the cmdline with
> EFI_SIMPLE_FILE_SYSTEM_PROTOCOL and EFI_FILE_PROTOCOL APIs.
>
> Speaking from memory, the synthetic filesystem has a unique device
> path, so the first step would be calling gBS->LocateDevicePath(), for
> finding SimpleFs on the unique device path. Once you have the SimpleFs
> interface, you can call OpenVolume(), then open the "cmdline" file
> using the EFI_FILE_PROTOCOL output by OpenVolume().
>
> Once we merge this series (basically just three patches), there is no
> QemuFwCfgLib dependency left in either QemuLoadImageLib instance, I
> reckon.
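Jotting down the call sequence as I understand it (pseudocode only; error handling, buffer allocation, and the exact vendor device path of the synthetic filesystem are omitted):

```
// find the handle carrying SimpleFs on the synthetic filesystem's
// unique device path
DevicePath = <synthetic filesystem vendor device path>;
Status = gBS->LocateDevicePath (&gEfiSimpleFileSystemProtocolGuid,
                                &DevicePath, &FsHandle);
Status = gBS->HandleProtocol (FsHandle, &gEfiSimpleFileSystemProtocolGuid,
                              (VOID **)&SimpleFs);

// open the volume, then the "cmdline" file, then read its contents
Status = SimpleFs->OpenVolume (SimpleFs, &Root);
Status = Root->Open (Root, &CmdlineFile, L"cmdline", EFI_FILE_MODE_READ, 0);
Status = CmdlineFile->Read (CmdlineFile, &BufferSize, Buffer);
```

Please correct me if the LocateDevicePath() step is not how you meant to find the SimpleFs instance.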
> Then you can post the second wave, in which:
>
> - a new "firmware config verifier" library class is introduced,
>
> - two library instances for that class are introduced (null, and the
>   real thing),
>
> - the AmdSevX64.dsc platform resolves the new lib class to the "real"
>   (hashing) instance,
>
> - all other platform DSCs using QemuKernelLoaderFsDxe resolve the new
>   lib class to the null instance,
>
> - QemuKernelLoaderFsDxe is extended with a dependency on the new
>   class, calling the proper APIs to (a) initialize the verifier, and
>   (b) verify every fw_cfg blob that is about to be exposed as a
>   synthetic file.
>
> Then QemuLoadImageLib needs no changes, as it will not depend on
> fw_cfg, and every synthetic file it may want to access will have been
> verified by QemuKernelLoaderFsDxe already, according to the verifier
> lib instance that's used in the respective platform DSC file.
>
> I would recommend only posting the first patch set initially. It has a
> very well defined goal (--> hide the fw_cfg dependency in both
> QemuLoadImageLib instances behind the synthetic filesystem); we can
> validate / review that regardless of the ultimate crypto / security
> goal. Using the SimpleFs / FILE protocol APIs is not trivial IMO, so
> it's possible that just the first wave will require a v2.

OK, I'll try to follow this plan.

>>> And then, we could eliminate the dynamic callback registration, plus
>>> the separate SevFwCfgVerifier, SevHashFinderLib, and
>>> SevQemuLoadImageLib stuff.
>>>
>>> We'd only need one new lib class, with *statically linked* hooks for
>>> QemuKernelLoaderFsDxe, and two instances of this new class, a Null
>>> one, and an actual (SEV hash verifier) one. The latter instance would
>>> locate the hash values, calculate the fresh hashes, and perform the
>>> comparisons. Only the AmdSevX64 platform would use the non-Null
>>> instance of this library class.
>> OK, I'll refactor to static linking with two BlobVerifierLib
>> implementations.
>>
>>> (NB QemuKernelLoaderFsDxe is used by some ArmVirtPkg platforms, so
>>> resolutions to the Null instance would be required there too.)
>>
>> I'll need to learn how to build edk2 for Arm to test this. Thanks for
>> the heads-up.
>
> With regard to QemuKernelLoaderFsDxe specifically:
>
> build -b NOOPT -t GCC5 -p ArmVirtPkg/ArmVirtQemu.dsc -a AARCH64
> build -b NOOPT -t GCC5 -p ArmVirtPkg/ArmVirtQemu.dsc -a ARM
> build -b NOOPT -t GCC5 -p ArmVirtPkg/ArmVirtQemuKernel.dsc -a AARCH64
> build -b NOOPT -t GCC5 -p ArmVirtPkg/ArmVirtQemuKernel.dsc -a ARM
>
> If you work on an x86_64 machine, you'll need cross-gcc and
> cross-binutils for this. I have the following packages installed on my
> laptop:
>
> binutils-aarch64-linux-gnu-2.31.1-3.el7.x86_64
> binutils-arm-linux-gnu-2.31.1-3.el7.x86_64
> cross-binutils-common-2.31.1-3.el7.noarch
> cross-gcc-common-9.2.1-3.el7.1.noarch
> gcc-aarch64-linux-gnu-9.2.1-3.el7.1.x86_64
> gcc-arm-linux-gnu-9.2.1-3.el7.1.x86_64
> gcc-c++-aarch64-linux-gnu-9.2.1-3.el7.1.x86_64
> gcc-c++-arm-linux-gnu-9.2.1-3.el7.1.x86_64
>
> (I don't remember why I have the c++ cross-compiler installed.)

Thanks for the details; I'll try it.

-Dov