Date: Wed, 10 Jul 2024 14:58:52 +0100
From: "Jonathan Cameron via groups.io"
To: Leif Lindholm
CC: Marcin Juszkiewicz, Xiong Yining, Ard Biesheuvel, Graeme Gregory, "Chen Baozi"
Subject: Re: [edk2-devel] [PATCH edk2-platforms v3 4/5] SbsaQemu: provide cache info per core in PPTT
Message-ID: <20240710145852.0000405a@Huawei.com>
References: <20240709-acpi65-v3-0-ee93ba536fcf@linaro.org>
 <20240709-acpi65-v3-4-ee93ba536fcf@linaro.org>
Organization: Huawei Technologies Research and Development (UK) Ltd.

On Tue, 9 Jul 2024 14:01:53 +0100
"Leif Lindholm" wrote:

> On Tue, Jul 09, 2024 at 12:47:09 +0200, Marcin Juszkiewicz wrote:
> > During Linaro Connect MAD24 I was asked to move cache information from
> > being 'per cluster' to 'per core'. This is a step towards implementing
> > MPAM support.
> >
> > So the topology moves from:
> >
> > Socket -> Clusters -> Cores + Caches -> Threads (if they exist)
> >
> > to:
> >
> > Socket -> Clusters -> Cores -> Caches + Threads (if they exist)
> >
> > Cache sizes are still 32+32+512 KB (L1d, L1i, L2) as QEMU does not
> > implement them at all, so we can report whatever we like.

They should match the system registers (CCSIDR etc.) that QEMU provides.
Here's some old code for doing PPTT cache entry generation for arm-virt:
https://lore.kernel.org/qemu-devel/20230808115713.2613-2-Jonathan.Cameron@huawei.com/

The numbers might happen to match what it has for the CPU you are using, though:
https://elixir.bootlin.com/qemu/latest/source/target/arm/tcg/cpu64.c#L1051

For N2 that looks to be 64+64+512...
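
As a rough illustration of the kind of match described above (a sketch only,
not taken from QEMU or edk2: it assumes the pre-ARMv8.3 CCSIDR_EL1 layout
without FEAT_CCIDX, and the CSSELR_EL1 selection / MRS read is omitted), the
cache size implied by a CCSIDR value can be computed as:

  #include <stdint.h>
  #include <stdio.h>

  /*
   * Field layout assumed (no FEAT_CCIDX):
   *   LineSize      [2:0]   -> bytes per line = 2^(LineSize + 4)
   *   Associativity [12:3]  -> ways = field + 1
   *   NumSets       [27:13] -> sets = field + 1
   */
  static uint64_t
  CcsidrToCacheSizeBytes (uint32_t Ccsidr)
  {
    uint32_t  LineSizeLog2 = (Ccsidr & 0x7) + 4;
    uint32_t  Ways         = ((Ccsidr >> 3) & 0x3FF) + 1;
    uint32_t  Sets         = ((Ccsidr >> 13) & 0x7FFF) + 1;

    return (uint64_t)Sets * Ways * (1ULL << LineSizeLog2);
  }

  int
  main (void)
  {
    /* Hypothetical value: 256 sets, 4 ways, 64-byte lines -> 64 KiB. */
    uint32_t  Example = (255u << 13) | (3u << 3) | 2u;

    printf ("cache size: %llu bytes\n",
            (unsigned long long)CcsidrToCacheSizeBytes (Example));
    return 0;
  }

Whatever sizes end up in the PPTT cache structures should reproduce what this
computation gives for the registers QEMU exposes for the chosen CPU model.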
> >
> > Signed-off-by: Marcin Juszkiewicz
>
> Reviewed-by: Leif Lindholm
>
> /
>     Leif
>
> > ---
> >  .../Drivers/SbsaQemuAcpiDxe/SbsaQemuAcpiDxe.c | 47 +++++++++++----------
> >  1 file changed, 25 insertions(+), 22 deletions(-)
> >
> > diff --git a/Silicon/Qemu/SbsaQemu/Drivers/SbsaQemuAcpiDxe/SbsaQemuAcpiDxe.c b/Silicon/Qemu/SbsaQemu/Drivers/SbsaQemuAcpiDxe/SbsaQemuAcpiDxe.c
> > index cf0102d11f1f..a7a9664abdcb 100644
> > --- a/Silicon/Qemu/SbsaQemu/Drivers/SbsaQemuAcpiDxe/SbsaQemuAcpiDxe.c
> > +++ b/Silicon/Qemu/SbsaQemu/Drivers/SbsaQemuAcpiDxe/SbsaQemuAcpiDxe.c
> > @@ -562,8 +562,8 @@ AddPpttTable (
> >    TableSize = sizeof (EFI_ACPI_DESCRIPTION_HEADER) +
> >                CpuTopo.Sockets * (sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR) +
> >                                   CpuTopo.Clusters * (sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR) +
> > -                                                     sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE) * 3 +
> >                                                       CpuTopo.Cores * (sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR) +
> > +                                                                      sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE) * 3 +
> >                                                                        sizeof (UINT32) * 2)));
> >
> >    if (CpuTopo.Threads > 1) {
> > @@ -609,10 +609,7 @@ AddPpttTable (
> >
> >    ClusterIndex = SocketIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR);
> >    for (ClusterNum = 0; ClusterNum < CpuTopo.Clusters; ClusterNum++) {
> > -    L1DCacheIndex = ClusterIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR);
> > -    L1ICacheIndex = L1DCacheIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
> > -    L2CacheIndex  = L1ICacheIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
> > -    CoreIndex     = L2CacheIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
> > +    CoreIndex = ClusterIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR);
> >
> >      // Add the Cluster PPTT structure
> >      EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR  Cluster = SBSAQEMU_ACPI_PROCESSOR_HIERARCHY_NODE_STRUCTURE_INIT (
> > @@ -624,27 +621,15 @@ AddPpttTable (
> >      CopyMem (New, &Cluster, sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR));
> >      New += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR);
> >
> > -    // Add L1 D Cache structure
> > -    L1DCache.CacheId = CacheId++;
> > -    CopyMem (New, &L1DCache, sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE));
> > -    ((EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE *)New)->NextLevelOfCache = L2CacheIndex;
> > -    New += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
> > -
> > -    // Add L1 I Cache structure
> > -    L1ICache.CacheId = CacheId++;
> > -    CopyMem (New, &L1ICache, sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE));
> > -    ((EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE *)New)->NextLevelOfCache = L2CacheIndex;
> > -    New += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
> > -
> > -    // Add L2 Cache structure
> > -    L2Cache.CacheId = CacheId++;
> > -    CopyMem (New, &L2Cache, sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE));
> > -    New += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
> > -
> >      for (CoreNum = 0; CoreNum < CpuTopo.Cores; CoreNum++) {
> >        UINT32  *PrivateResourcePtr;
> >        UINT32  CoreCpuId;
> >
> > +      // two UINT32s for PrivateResourcePtr data
> > +      L1DCacheIndex = CoreIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR) + sizeof (UINT32) * 2;
> > +      L1ICacheIndex = L1DCacheIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
> > +      L2CacheIndex  = L1ICacheIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
> > +
> >        if (CpuTopo.Threads == 1) {
> >          CoreCpuId = CpuId;
> >        } else {
> > @@ -665,6 +650,23 @@ AddPpttTable (
> >        PrivateResourcePtr[1] = L1ICacheIndex;
> >        New += (2 * sizeof (UINT32));
> >
> > +      // Add L1 D Cache structure
> > +      L1DCache.CacheId = CacheId++;
> > +      CopyMem (New, &L1DCache, sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE));
> > +      ((EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE *)New)->NextLevelOfCache = L2CacheIndex;
> > +      New += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
> > +
> > +      // Add L1 I Cache structure
> > +      L1ICache.CacheId = CacheId++;
> > +      CopyMem (New, &L1ICache, sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE));
> > +      ((EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE *)New)->NextLevelOfCache = L2CacheIndex;
> > +      New += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
> > +
> > +      // Add L2 Cache structure
> > +      L2Cache.CacheId = CacheId++;
> > +      CopyMem (New, &L2Cache, sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE));
> > +      New += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
> > +
> >        if (CpuTopo.Threads == 1) {
> >          CpuId++;
> >        } else {
> > @@ -685,6 +687,7 @@ AddPpttTable (
> >        }
> >
> >        CoreIndex += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR) + sizeof (UINT32) * 2;
> > +      CoreIndex += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE) * 3;
> >      }
> >
> >      ClusterIndex = CoreIndex;
> >
> > --
> > 2.45.2
> >
>
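
As a side note on the index arithmetic in the hunks above (a standalone
sketch with made-up structure sizes standing in for the real
EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR and EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE
sizeofs): each per-core record now consists of the processor node, the two
UINT32 private resource references and the three cache structures, which is
where the L1D/L1I/L2 offsets and the CoreIndex increment come from.

  #include <stdint.h>
  #include <stdio.h>

  /* Stand-in sizes for illustration only. */
  #define PPTT_PROCESSOR_SIZE  20u
  #define PPTT_CACHE_SIZE      28u

  int
  main (void)
  {
    uint32_t  CoreIndex = 0x100;  /* arbitrary example offset of a core node */

    /* The core node is followed by two UINT32 private resource refs (L1D, L1I). */
    uint32_t  L1DCacheIndex = CoreIndex + PPTT_PROCESSOR_SIZE + 2 * sizeof (uint32_t);
    uint32_t  L1ICacheIndex = L1DCacheIndex + PPTT_CACHE_SIZE;
    uint32_t  L2CacheIndex  = L1ICacheIndex + PPTT_CACHE_SIZE;

    /* The next core record starts after the node, its resource refs and its caches. */
    uint32_t  NextCoreIndex = CoreIndex + PPTT_PROCESSOR_SIZE +
                              2 * sizeof (uint32_t) + 3 * PPTT_CACHE_SIZE;

    printf ("L1D=0x%x L1I=0x%x L2=0x%x next core=0x%x\n",
            (unsigned)L1DCacheIndex, (unsigned)L1ICacheIndex,
            (unsigned)L2CacheIndex, (unsigned)NextCoreIndex);
    return 0;
  }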