Date: Wed, 25 Aug 2021 16:51:43 +0200
From: "Gerd Hoffmann" <kraxel@redhat.com>
To: "Yao, Jiewen"
Cc: devel@edk2.groups.io, Ard Biesheuvel, "Xu, Min M", Ard Biesheuvel,
 "Justen, Jordan L", Brijesh Singh, Erdem Aktas, James Bottomley,
 Tom Lendacky
Subject: Re: [edk2-devel] [PATCH 18/23] OvmfPkg: Enable Tdx in SecMain.c
Message-ID: <20210825145143.rp3gqcqzd6fktkjk@sirius.home.kraxel.org>
References: <95f116893a4a17c7e0966e240a650f871c9f9392.1628767741.git.min.m.xu@intel.com>
 <20210819064937.o646vxjebwzgfgoz@sirius.home.kraxel.org>
 <20210820072253.plne3mudm3dj6777@sirius.home.kraxel.org>
 <20210825075218.mpmkcwu3zo6tykm2@sirius.home.kraxel.org>

  Hi,

> > > In the TDVF design, we chose to use the TDX-defined initial pointer
> > > to pass the initial memory information - the TD HOB - instead of
> > > the CMOS region.  Please help me understand what the real concern
> > > is here.
> >
> > Well, qemu settled on the fw_cfg design for a number of reasons.  It
> > is pretty flexible, for example.  The firmware can ask for the
> > information it needs at any time and can store it as it pleases.
> >
> > I'd suggest to not take it for granted that an additional,
> > alternative way to do basically the same thing will be accepted into
> > upstream qemu.  Submit your patches to qemu-devel to discuss that.
>
> [Jiewen] I think the Intel Linux team is doing that separately.

Please ask them to send the patches.  Changes like this obviously need
coordination and agreement between the qemu and edk2 projects, and
ideally both guest and host code are reviewed in parallel.

> > Most fw_cfg entries are constant anyway, so we can easily avoid a
> > second call by caching the results of the first call, if that helps
> > TDVF.
>
> [Jiewen] It is possible.  We can do it multiple ways:
> 1) Per-usage cache.  However, that means every driver needs its own
>    way to cache the data, either a PCD or a HOB in the PEI phase.
>    Also, driver A needs to know clearly that driver B will use the
>    same data; only then will it cache it, otherwise it will not.  I
>    consider that a huge burden for the developer.
> 2) Always cache, per driver.  That means every driver must follow the
>    same pattern: search the cache; on a miss, fetch the data and cache
>    it.  But it still cannot architecturally guarantee the data order
>    across different code paths.
> 3) Always cache in one common driver.  One driver can fetch all the
>    data once and cache it.  That resolves the data-order problem.  I
>    am not sure whether that is desired, but I cannot see much
>    difference from passing the data at the entry point.

Not investigated yet.  seabios fw_cfg handling is close to (3) for
small items (not the kernel or initrd or other large data sets), so I
think I would look into that first.
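Something like this maybe, as a starting point (completely untested
sketch written against edk2's QemuFwCfgLib API; the cache table and the
FwCfgCachedRead name are made up for illustration):

  #include <Uefi.h>
  #include <Library/BaseLib.h>
  #include <Library/MemoryAllocationLib.h>
  #include <Library/QemuFwCfgLib.h>

  #define FW_CFG_CACHE_SLOTS  32

  typedef struct {
    CHAR8    Name[56];        // fw_cfg file names are at most 56 bytes
    UINTN    Size;
    VOID     *Data;
  } FW_CFG_CACHE_ENTRY;

  STATIC FW_CFG_CACHE_ENTRY  mCache[FW_CFG_CACHE_SLOTS];
  STATIC UINTN               mCacheUsed;

  RETURN_STATUS
  FwCfgCachedRead (
    IN  CONST CHAR8  *Name,
    OUT VOID         **Data,
    OUT UINTN        *Size
    )
  {
    FIRMWARE_CONFIG_ITEM  Item;
    UINTN                 Index;

    //
    // Serve repeated reads from the cache, so each fw_cfg item is
    // transferred from the host exactly once.
    //
    for (Index = 0; Index < mCacheUsed; Index++) {
      if (AsciiStrCmp (mCache[Index].Name, Name) == 0) {
        *Data = mCache[Index].Data;
        *Size = mCache[Index].Size;
        return RETURN_SUCCESS;
      }
    }

    if ((mCacheUsed == FW_CFG_CACHE_SLOTS) ||
        RETURN_ERROR (QemuFwCfgFindFile (Name, &Item, Size))) {
      return RETURN_NOT_FOUND;
    }

    //
    // Cache miss: read the item once and keep it around.
    //
    *Data = AllocatePool (*Size);
    if (*Data == NULL) {
      return RETURN_OUT_OF_RESOURCES;
    }
    QemuFwCfgSelectItem (Item);
    QemuFwCfgReadBytes (*Size, *Data);

    AsciiStrCpyS (mCache[mCacheUsed].Name, sizeof (mCache[0].Name), Name);
    mCache[mCacheUsed].Size = *Size;
    mCache[mCacheUsed].Data = *Data;
    mCacheUsed++;

    return RETURN_SUCCESS;
  }

That would also give TDVF a single, well-defined place where each item
gets measured exactly once, at cache-fill time rather than at every use.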
> > > Using a HOB at the initial pointer can be an alternative pattern
> > > to mitigate such risk.  We just need to measure it once, then any
> > > component can use it.  Also, it can help people evaluate the RTMR
> > > hash and the TD event log data for the configuration in the
> > > attestation flow, because the configuration is independent of the
> > > code execution flow.
> >
> > Well, it covers only the memory map, correct?  All other
> > configuration is still loaded from fw_cfg.  I can't see the
> > improvement here.
>
> [Jiewen] At this point in time, the memory map is the most important
> parameter in the TD HOB, because we do need the memory information at
> the TD entrypoint.  That is mandatory for any TD boot.

Well, I can see that the memory map is kind of special here because you
need it quite early in the firmware initialization workflow.

> The fw_cfg is still allowed in the TDVF design guide, just because we
> feel it is a burden to convert everything suddenly.

What is the longer-term plan here?  Does it make sense to special-case
the memory map?  If we want to handle other fw_cfg items that way too
later on, shouldn't we rather check how we can improve the fw_cfg
interface so it works better with confidential computing?

> > How do you pass the HOB to the guest?  Copy data to guest ram?  Map
> > a ro page into guest address space?  What happens on VM reset?
>
> [Jiewen] Yes, the VMM will prepare the memory information based upon
> the TDVF metadata.  The VMM needs to copy the TD HOB data to a
> predefined memory region according to the TDVF metadata.

Is all that documented somewhere?  The TDVF design overview focuses on
the guest/firmware side of things, so it isn't very helpful here.  Did
I mention posting the qemu patches would be a good idea?
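For reference, consuming that region in the firmware should then just
be the standard PI HOB walk, something like this (untested sketch;
TdHobStart stands in for the TDX-defined initial pointer):

  #include <PiPei.h>

  STATIC
  VOID
  ScanTdHob (
    IN VOID  *TdHobStart      // the TDX-defined initial pointer
    )
  {
    EFI_PEI_HOB_POINTERS  Hob;

    Hob.Raw = (UINT8 *)TdHobStart;
    while (!END_OF_HOB_LIST (Hob)) {
      if ((GET_HOB_TYPE (Hob) == EFI_HOB_TYPE_RESOURCE_DESCRIPTOR) &&
          (Hob.ResourceDescriptor->ResourceType ==
           EFI_RESOURCE_SYSTEM_MEMORY)) {
        //
        // [PhysicalStart, PhysicalStart + ResourceLength) is guest
        // RAM; hand it to the early memory setup (and accept the
        // pages on TDX).  Validation of the ranges omitted here.
        //
      }
      Hob.Raw = GET_NEXT_HOB (Hob);
    }
  }

The guest side looks straightforward; the interesting part is really
the VMM side, which is why I keep asking for the qemu patches.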
> I don't fully understand the VM reset question.  I'll try to answer,
> but if that is not what you are asking, please clarify.

What happens if you reboot the guest?  On non-TDX guests the VM will be
reset, the cpu will jump to the reset vector (executing from rom /
flash), and the firmware will re-initialize everything and re-load any
config information it needs from fw_cfg.

> This action, the VMM initializing the TD HOB, happens when the VMM
> launches a TD guest.  After that the region becomes TD private
> memory, owned by the TD.  The VMM can no longer access it (no read,
> no write).  If the VM resets, this memory is gone.  If the VMM needs
> to launch a new TD, it needs to initialize the data again.

Sounds like reset is not supported; you need to stop and re-start the
guest instead.  Is that correct?

take care,
  Gerd