From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <8142cc40-ca21-2748-a3de-d0432ccbdc07@redhat.com>
Date: Fri, 20 Jan 2023 09:21:22 +0100
MIME-Version: 1.0
Subject: Re: [PATCH v3 1/5] UefiCpuPkg/SmmBaseHob.h: Add SMM Base HOB Data
To: "Ni, Ray" , Gerd Hoffmann , "Wu, Jiaxin"
Cc:
 "devel@edk2.groups.io" , "Dong, Eric" , "Zeng, Star" ,
 "Kumar, Rahul R" , "Kinney, Michael D" , "Zimmer, Vincent"
References: <20230118095620.9860-1-jiaxin.wu@intel.com>
 <20230118095620.9860-2-jiaxin.wu@intel.com>
 <20230118111913.xgjlxdhngzwhvf76@sirius.home.kraxel.org>
From: "Laszlo Ersek"
In-Reply-To:
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 1/18/23 16:06, Ni, Ray wrote:
> #pragma pack(1)
> typedef struct {
>   UINT32 CpuIndex;
>   UINT32 NumberOfCpus; // align to EFI_SEC_PLATFORM_INFORMATION_RECORD2.NumberOfCpus
>   UINT64 SmBase[];
> } SMM_BASE_HOB_DATA;
> #pragma pack()
>
> For systems with fewer than 8K CPUs, one HOB is produced. CpuIndex is
> set to 0, indicating that the HOB describes the CPUs from 0 to
> NumberOfCpus-1.
>
> The HOB list may contain multiple such HOB instances, each describing
> the information for CPUs from CpuIndex to CpuIndex + NumberOfCpus - 1.
> The instance order in the HOB list is random, so the consumer cannot
> assume that the CpuIndex of the first instance is 0.

When using discontiguous blocks that are limited to ~64KB each:

- The consumer won't be able to access elements of the "conceptual" big
  array in a truly random (= random-access) fashion. There won't be a
  single contiguous domain of valid subscripts. It's "bank switching",
  and switching banks should be avoided IMO. It effectively replaces the
  vector data structure with a linked list. The consequence is that the
  consumer will have to either (a) build a new (temporary or permanent)
  index table of sorts, for implementing the "conceptual" big array as a
  factual contiguous array, or (b) traverse the HOB list multiple times.

- If the element size of the array increases (which is otherwise
  possible to do compatibly, e.g. by placing a GUID and/or revision# in
  the HOB), the number of elements that fit in a single HOB decreases.
  I think that's an artifact that needlessly complicates debugging, and
  maybe performance too (it might increase the number of full-list
  traversals).

- With relatively many elements fitting into a single HOB, just one HOB
  is going to be used on most platforms. While that may be good for
  performance, it is not good for code coverage (testing): the quirky
  indexing method will not be exercised by most platforms.

What's wrong with:

- restricting the creation of all such HOBs to after
  "gEfiPeiMemoryDiscoveredPpiGuid",

- using a page allocation for representing the array contiguously,

- storing, in the HOB, only the address of the page allocation?

Page allocations performed after gEfiPeiMemoryDiscoveredPpiGuid will not
be moved, so the address in the HOB would be stable.

Laszlo
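The index-table rebuild of option (a) above can be sketched as follows. This is a standalone simulation, not code from the patch: it uses <stdint.h> types in place of EDK2's UINT32/UINT64, a plain pointer array in place of a real PEI HOB-list walk, and a hypothetical helper name (CollectSmBase).

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Mirrors the SMM_BASE_HOB_DATA layout from the quoted patch,
   with stdint types standing in for EDK2's UINT32/UINT64. */
#pragma pack(1)
typedef struct {
  uint32_t CpuIndex;
  uint32_t NumberOfCpus;
  uint64_t SmBase[];
} SMM_BASE_HOB_DATA;
#pragma pack()

/* Option (a): one pass over the (arbitrarily ordered) HOB instances,
   copying each fragment into a single contiguous array indexed by
   CpuIndex.  Here the "HOB list" is simulated as an array of pointers;
   real PEI code would walk the HOB list instead.  Returns a malloc'd
   array of TotalCpus entries, or NULL on error. */
static uint64_t *
CollectSmBase (SMM_BASE_HOB_DATA * const *Hobs, size_t HobCount,
               uint32_t TotalCpus)
{
  uint64_t *All = calloc (TotalCpus, sizeof *All);

  if (All == NULL) {
    return NULL;
  }
  for (size_t Idx = 0; Idx < HobCount; Idx++) {
    const SMM_BASE_HOB_DATA *Hob = Hobs[Idx];

    /* Reject a fragment that would overflow the merged array. */
    if ((uint64_t)Hob->CpuIndex + Hob->NumberOfCpus > TotalCpus) {
      free (All);
      return NULL;
    }
    memcpy (All + Hob->CpuIndex, Hob->SmBase,
            (size_t)Hob->NumberOfCpus * sizeof *All);
  }
  return All;
}
```

Note that even this "cheap" variant costs a full extra copy of the array plus a heap allocation, which is the overhead the mail argues a contiguous layout would avoid.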
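The page-allocation alternative proposed above can be sketched in the same simulated style. The SMM_BASE_PTR_HOB_DATA layout, the field names, and the dummy SMBASE values are all hypothetical; calloc stands in for a PEI page allocation made after gEfiPeiMemoryDiscoveredPpiGuid, and the struct stands in for a GUIDed HOB built with the usual HOB services.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical HOB payload for the proposed scheme: the HOB carries
   only the element count and the address of one contiguous array. */
#pragma pack(1)
typedef struct {
  uint32_t NumberOfCpus;
  uint64_t SmBaseArrayAddress;  /* NumberOfCpus uint64_t entries */
} SMM_BASE_PTR_HOB_DATA;
#pragma pack()

/* Producer side (simulated): allocate the array once, contiguously.
   In PEI this would be a page allocation performed after
   gEfiPeiMemoryDiscoveredPpiGuid, so the address never changes. */
static SMM_BASE_PTR_HOB_DATA
ProduceSmBaseHob (uint32_t NumberOfCpus)
{
  uint64_t *Array = calloc (NumberOfCpus, sizeof *Array);

  assert (Array != NULL);
  for (uint32_t Idx = 0; Idx < NumberOfCpus; Idx++) {
    Array[Idx] = 0x10000u + (uint64_t)Idx * 0x1000;  /* dummy SMBASEs */
  }
  return (SMM_BASE_PTR_HOB_DATA){
    NumberOfCpus, (uint64_t)(uintptr_t)Array
  };
}

/* Consumer side: plain O(1) random access over one contiguous domain
   of subscripts -- no bank switching, no index table, no re-traversal
   of the HOB list. */
static uint64_t
GetSmBase (const SMM_BASE_PTR_HOB_DATA *Hob, uint32_t CpuIndex)
{
  const uint64_t *Array;

  Array = (const uint64_t *)(uintptr_t)Hob->SmBaseArrayAddress;
  assert (CpuIndex < Hob->NumberOfCpus);
  return Array[CpuIndex];
}
```

The HOB itself stays a few bytes regardless of CPU count, so the per-HOB size limit never forces the data to be split.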