Date: Thu, 2 Feb 2023 10:00:03 +0100
From: "Gerd Hoffmann" <kraxel@redhat.com>
To: "Wu, Jiaxin"
Cc: "Ni, Ray", "devel@edk2.groups.io", "Dong, Eric", "Zeng, Star",
 Laszlo Ersek, "Kumar, Rahul R"
Subject: Re: [PATCH v3 5/5] OvmfPkg/SmmCpuFeaturesLib: Skip SMBASE configuration
Message-ID: <20230202090003.5vmmeyhsv4zn7wn4@sirius.home.kraxel.org>
References: <20230118095620.9860-1-jiaxin.wu@intel.com>
 <20230118095620.9860-6-jiaxin.wu@intel.com>
 <20230118121958.cxbfh3fljedvebis@sirius.home.kraxel.org>
 <20230119075303.nkyno36h25xscwkn@sirius.home.kraxel.org>
 <20230201134051.7jlc7a74cogcskw5@sirius.home.kraxel.org>

  Hi,

> > But the serialized SMBASE programming still happens, now in the PEI
> > module, and I don't see a good reason why the runtime of the new PEI
> > module and the runtime of PiSmmCpuDxeSmm combined is faster than
> > before.
>
> As I said, the PEI module can also program SMBASE in parallel, for
> example by programming some register directly instead of depending on
> the existing RSM instruction to reload the SMBASE register with the
> newly allocated SMBASE each time it exits SMM.

Ok.  So new Intel processors apparently got new MSR(s) to set SMBASE
directly.

Any specific reason why you don't add support for that to
PiSmmCpuDxeSmm?  That would avoid needing the new HOB (and the related
problems with the 8190 CPU limit) in the first place.

> Different vendors might have different implementations.

Yes.  We can have different implementations in PiSmmCpuDxeSmm and/or
SmmCpuFeaturesLib to handle that.
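Just to illustrate what "program in parallel" would look like (purely a
sketch; the MSR index below is made up, since this thread only says
"new MSR(s)" without naming one, and I'm assuming AsmWriteMsr64 from
MdePkg BaseLib as the accessor):

#include <Base.h>
#include <Library/BaseLib.h>

//
// Hypothetical MSR index -- not a documented register, just a
// placeholder for whatever the new processors actually provide.
//
#define MSR_HYPOTHETICAL_SMBASE  0x000004F0

/**
  Sketch: program this processor's SMBASE directly via an MSR.

  Every CPU can execute this concurrently (e.g. dispatched through the
  PEI MP services), so no serialized enter-SMM/RSM cycle per processor
  is needed.

  @param[in,out] Context  Pointer to this CPU's pre-allocated SMBASE.
**/
VOID
EFIAPI
SetSmBaseDirect (
  IN OUT VOID  *Context
  )
{
  UINTN  NewSmBase;

  NewSmBase = *(UINTN *)Context;
  AsmWriteMsr64 (MSR_HYPOTHETICAL_SMBASE, (UINT64)NewSmBase);
}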
> Another benefit of this series is that it makes the SMBASE relocation
> more independent and simpler compared with the existing process in the
> SMM CPU driver.

Maybe it is.  Hard to judge from the outside if you are not willing to
show the code of the PEI module.

> > Do you intend to submit code for OVMF producing such a HOB?
> > There isn't any in this series.
>
> No, we won't do that.

Then there is no point in changing the OVMF code, other than maybe
adding an ASSERT that there is no such HOB.

> > How is the SMM initialization of hotplugged CPUs supposed to work
> > with the new mode of operation?
>
> Yes, that's already considered.  For hot-plugged CPU support, the max
> number of CPUs should be defined in PcdCpuMaxLogicalProcessorNumber,
> and all CPU hot plug data is recorded at PcdCpuHotPlugDataAddress,
> which contains the SMBASE info pre-allocated in the HOB (the values
> are filled in by the PiSmmCpuDxeSmm driver); the CPU hotplug MMI
> handler will then relocate the SMBASE for new CPUs.

So that keeps the old workflow.

take care,
  Gerd
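P.S.  Rough sketch of the ASSERT idea, assuming the HOB ends up being
identified by a GUID along the lines of gSmmBaseHobGuid (name assumed
here; GetFirstGuidHob and ASSERT are the standard MdePkg HobLib and
DebugLib interfaces):

#include <Uefi.h>
#include <Library/DebugLib.h>
#include <Library/HobLib.h>

//
// GUID declaration assumed; normally pulled in via the package .dec.
//
extern EFI_GUID  gSmmBaseHobGuid;

VOID
AssertNoSmmBaseHob (
  VOID
  )
{
  //
  // OVMF never produces the SMM base HOB, so finding one in the HOB
  // list indicates a misconfigured platform.
  //
  ASSERT (GetFirstGuidHob (&gSmmBaseHobGuid) == NULL);
}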