From: "Yao, Jiewen" <jiewen.yao@intel.com>
To: Laszlo Ersek
CC: "Ni, Ruiyu", "edk2-devel@lists.01.org", "Dong, Eric"
Subject: Re: [PATCH] UefiCpuPkg/SmmCpu: Block SMM read-out only when static paging is used
Date: Tue, 6 Nov 2018 22:44:34 +0000
Message-ID: <4F047944-D550-432F-9A60-451D9772FE43@intel.com>
References: <20181106025935.102620-1-ruiyu.ni@intel.com>

Good suggestion, Laszlo.

Static paging currently enforces the following:
1) Only the valid SMM communication buffer is present; OS memory is not present.
2) Non-SMRAM is NX (whether paging is static or dynamic).
3) The code region in SMM is RO (if the PE image is page aligned).
4) The data region in SMM is NX (if the PE image is page aligned).
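
As a rough sketch, those four rules amount to the attribute assignment below. The region and attribute names are illustrative only; this is a standalone model, not the actual PiSmmCpuDxeSmm page-table code:

/*
 * Minimal standalone model of the access rules listed above.
 * Illustrative names only; not the actual PiSmmCpuDxeSmm code.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum {
  RegionSmmCode,        /* code inside SMRAM                       */
  RegionSmmData,        /* data inside SMRAM                       */
  RegionSmmCommBuffer,  /* the validated SMM communication buffer  */
  RegionOsMemory        /* all other non-SMRAM (OS-owned) memory   */
} REGION_TYPE;

typedef struct {
  bool Present;     /* mapped at all  */
  bool Writable;    /* RO when false  */
  bool Executable;  /* NX when false  */
} PAGE_ATTRIBUTE;

/* Attributes a static SMM page table would assign, per rules 1) - 4). */
static PAGE_ATTRIBUTE
StaticPagingAttribute (REGION_TYPE Type)
{
  switch (Type) {
  case RegionSmmCode:        /* 3) code in SMRAM is RO (page-aligned PE image) */
    return (PAGE_ATTRIBUTE){ .Present = true,  .Writable = false, .Executable = true  };
  case RegionSmmData:        /* 4) data in SMRAM is NX (page-aligned PE image) */
    return (PAGE_ATTRIBUTE){ .Present = true,  .Writable = true,  .Executable = false };
  case RegionSmmCommBuffer:  /* 1) comm buffer is present, 2) non-SMRAM is NX  */
    return (PAGE_ATTRIBUTE){ .Present = true,  .Writable = true,  .Executable = false };
  case RegionOsMemory:       /* 1) OS memory is not present at all             */
  default:
    return (PAGE_ATTRIBUTE){ .Present = false, .Writable = false, .Executable = false };
  }
}

int
main (void)
{
  static const char *Names[] = { "SMM code", "SMM data", "SMM comm buffer", "OS memory" };

  for (int Index = 0; Index < 4; Index++) {
    PAGE_ATTRIBUTE Attr = StaticPagingAttribute ((REGION_TYPE)Index);
    printf ("%-16s present=%d writable=%d executable=%d\n",
            Names[Index], Attr.Present, Attr.Writable, Attr.Executable);
  }
  return 0;
}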

Thank you!
Yao, Jiewen

> On Nov 7, 2018, at 12:13 AM, Laszlo Ersek wrote:
>
> On 11/06/18 03:59, Ruiyu Ni wrote:
>> From: Jiewen Yao
>>
>> Today's implementation blocks SMM read-out whether or not static paging
>> is enabled. But certain platforms may need to read non-SMM content from
>> SMM code, and those platforms have no way to disable the read-out
>> blocking.
>>
>> The patch updates the policy to block SMM read-out only when static
>> paging is enabled, so that static paging can be disabled for those
>> platforms that want SMM read-out.
>>
>> Setting PcdCpuSmmStaticPageTable to FALSE disables static paging.
>>
>> Contributed-under: TianoCore Contribution Agreement 1.1
>> Signed-off-by: Jiewen Yao
>> Signed-off-by: Ruiyu Ni
>> Cc: Eric Dong
>> Cc: Jiewen Yao
>> Cc: Laszlo Ersek
>> ---
>> UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
>> index 5bb7d57238..117502dafa 100644
>> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
>> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
>> @@ -1,7 +1,7 @@
>> /** @file
>> Page Fault (#PF) handler for X64 processors
>>
>> -Copyright (c) 2009 - 2017, Intel Corporation. All rights reserved.<BR>
>> +Copyright (c) 2009 - 2018, Intel Corporation. All rights reserved.<BR>
>> Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
>>
>> This program and the accompanying materials
>> @@ -890,7 +890,7 @@ SmiPFHandler (
>>     CpuDeadLoop ();
>>   }
>>
>> -  if (IsSmmCommBufferForbiddenAddress (PFAddress)) {
>> +  if (mCpuSmmStaticPageTable && IsSmmCommBufferForbiddenAddress (PFAddress)) {
>>     DumpCpuContext (InterruptType, SystemContext);
>>     DEBUG ((DEBUG_ERROR, "Access SMM communication forbidden address (0x%lx)!\n", PFAddress));
>>     DEBUG_CODE (
>>
>
> OVMF inherits the default TRUE value for PcdCpuSmmStaticPageTable, from
> "UefiCpuPkg.dec", and that's intentional. Therefore this patch should be
> a no-op from OVMF's perspective.
>
> Acked-by: Laszlo Ersek
>
> More generally, is the use of PcdCpuSmmStaticPageTable for controlling
> this kind of read-out just a convenience / simplification (in which case
> I don't think it's great!), or are these topics inherently connected
> somehow?
>
> I remember that Jiewen said earlier that with "static paging" enabled
> (i.e., building the page tables used in SMM all in advance), we provide
> more page protection.
>
> Also, I see that PcdCpuSmmProfileEnable can only be enabled with
> PcdCpuSmmStaticPageTable set to FALSE.
>
> So it seems that with PcdCpuSmmStaticPageTable set to TRUE, our page
> fault handling in SMM is generally strict(er). This patch looks
> consistent with that, but it would be nice if the commit message spelled
> out why *exactly* it makes sense to use PcdCpuSmmStaticPageTable for
> this new purpose as well.
>
> (I hope my question makes sense. :) )
>
> Thanks
> Laszlo
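
For reference, here is a minimal standalone sketch of the check the patch changes. The variable and function names follow the diff above, but the stubbed communication-buffer range and the wrapper function are illustrative assumptions, not the actual SmiPFHandler logic. With PcdCpuSmmStaticPageTable (mirrored by mCpuSmmStaticPageTable) set to FALSE, a fault on an address outside the communication buffer is no longer reported as forbidden:

/*
 * Simplified model of the post-patch check; not the real PiSmmCpuDxeSmm code.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool mCpuSmmStaticPageTable;   /* follows PcdCpuSmmStaticPageTable */

/* Stub: the real driver checks the recorded communication buffer ranges. */
static bool
IsSmmCommBufferForbiddenAddress (uint64_t PFAddress)
{
  const uint64_t CommBufferBase = 0x80000000ULL;  /* illustrative range */
  const uint64_t CommBufferSize = 0x10000ULL;

  return (PFAddress < CommBufferBase) ||
         (PFAddress >= CommBufferBase + CommBufferSize);
}

/* TRUE when the #PF must be reported as a forbidden SMM read-out. */
static bool
IsForbiddenReadOut (uint64_t PFAddress)
{
  /* After the patch: enforced only when static paging is in use. */
  return mCpuSmmStaticPageTable && IsSmmCommBufferForbiddenAddress (PFAddress);
}

int
main (void)
{
  const uint64_t OsAddress = 0x40000000ULL;  /* outside the comm buffer */

  mCpuSmmStaticPageTable = true;             /* PcdCpuSmmStaticPageTable == TRUE  */
  printf ("static paging:  forbidden=%d\n", IsForbiddenReadOut (OsAddress));

  mCpuSmmStaticPageTable = false;            /* PcdCpuSmmStaticPageTable == FALSE */
  printf ("dynamic paging: forbidden=%d\n", IsForbiddenReadOut (OsAddress));

  return 0;
}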