From: "Marcin Juszkiewicz" <marcin.juszkiewicz@linaro.org>
To: devel@edk2.groups.io
Cc: Leif Lindholm <quic_llindhol@quicinc.com>,
Ard Biesheuvel <ardb+tianocore@kernel.org>,
Graeme Gregory <graeme@xora.org.uk>,
Chen Baozi <chenbaozi@phytium.com.cn>,
Xiong Yining <xiongyining1480@phytium.com.cn>,
Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Subject: [edk2-devel] [PATCH edk2-platforms v2 3/3] SbsaQemu: provide cache info per core in PPTT
Date: Tue, 02 Jul 2024 18:33:04 +0200 [thread overview]
Message-ID: <20240702-acpi65-v2-3-3cb18a892221@linaro.org> (raw)
In-Reply-To: <20240702-acpi65-v2-0-3cb18a892221@linaro.org>
During Linaro Connect MAD24 I was asked to move the cache information from
being 'per cluster' to being 'per core'. This is a step towards implementing
MPAM support.
So the topology moves from:
Socket -> Clusters -> Cores + Caches -> Threads (if present)
to:
Socket -> Clusters -> Cores -> Caches + Threads (if present)
Cache sizes stay at 32+32+512 KB (L1d, L1i, L2); QEMU does not model caches
at all, so the reported values are arbitrary.
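For reference, below is a minimal standalone sketch of the new table-size
arithmetic. The structure sizes are illustrative placeholders (the driver uses
sizeof () on the real EFI_ACPI_6_5_PPTT_* types from MdePkg), the
2-socket/2-cluster/4-core topology is a made-up example, and a single thread
per core is assumed (the Threads > 1 case adds further nodes not shown here):

  #include <stdio.h>

  /* Illustrative placeholder sizes; the driver uses sizeof () on the real
     edk2 ACPI 6.5 types. */
  #define HEADER_SIZE      36u  /* EFI_ACPI_DESCRIPTION_HEADER */
  #define PROC_NODE_SIZE   20u  /* EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR */
  #define CACHE_NODE_SIZE  28u  /* EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE */
  #define PRIV_REF_SIZE     4u  /* one UINT32 private-resource reference */

  int
  main (void)
  {
    unsigned int  Sockets  = 2;   /* hypothetical sample topology */
    unsigned int  Clusters = 2;
    unsigned int  Cores    = 4;
    unsigned int  TableSize;

    /* Caches are now counted per core, not per cluster, so the three cache
       nodes and the two private-resource references multiply with Cores. */
    TableSize = HEADER_SIZE +
                Sockets * (PROC_NODE_SIZE +
                           Clusters * (PROC_NODE_SIZE +
                                       Cores * (PROC_NODE_SIZE +
                                                CACHE_NODE_SIZE * 3u +
                                                PRIV_REF_SIZE * 2u)));

    printf ("PPTT size for %u socket(s): %u bytes\n", Sockets, TableSize);
    return 0;
  }

The only change versus the old layout is that the three cache nodes moved
from the per-cluster term to the per-core term of the formula.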
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
---
.../Drivers/SbsaQemuAcpiDxe/SbsaQemuAcpiDxe.c | 46 ++++++++++----------
1 file changed, 24 insertions(+), 22 deletions(-)
diff --git a/Silicon/Qemu/SbsaQemu/Drivers/SbsaQemuAcpiDxe/SbsaQemuAcpiDxe.c b/Silicon/Qemu/SbsaQemu/Drivers/SbsaQemuAcpiDxe/SbsaQemuAcpiDxe.c
index 4c275faf7de6..02c84a16a4bc 100644
--- a/Silicon/Qemu/SbsaQemu/Drivers/SbsaQemuAcpiDxe/SbsaQemuAcpiDxe.c
+++ b/Silicon/Qemu/SbsaQemu/Drivers/SbsaQemuAcpiDxe/SbsaQemuAcpiDxe.c
@@ -561,8 +561,8 @@ AddPpttTable (
TableSize = sizeof (EFI_ACPI_DESCRIPTION_HEADER) +
CpuTopo.Sockets * (sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR) +
CpuTopo.Clusters * (sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR) +
- sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE) * 3 +
CpuTopo.Cores * (sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR) +
+ sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE) * 3 +
sizeof (UINT32) * 2)));
if (CpuTopo.Threads > 1) {
@@ -604,37 +604,21 @@ AddPpttTable (
ClusterIndex = SocketIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR);
for (ClusterNum = 0; ClusterNum < CpuTopo.Clusters; ClusterNum++) {
- L1DCacheIndex = ClusterIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR);
- L1ICacheIndex = L1DCacheIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
- L2CacheIndex = L1ICacheIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
- CoreIndex = L2CacheIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
+ CoreIndex = ClusterIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR);
// Add the Cluster PPTT structure
EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR Cluster = SBSAQEMU_ACPI_PROCESSOR_HIERARCHY_NODE_STRUCTURE_INIT (ClusterFlags, SocketIndex, 0, 0);
CopyMem (New, &Cluster, sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR));
New += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR);
- L1DCache.CacheId = CacheId++;
- // Add L1 D Cache structure
- CopyMem (New, &L1DCache, sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE));
- ((EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE *)New)->NextLevelOfCache = L2CacheIndex;
- New += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
-
- L1ICache.CacheId = CacheId++;
- // Add L1 I Cache structure
- CopyMem (New, &L1ICache, sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE));
- ((EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE *)New)->NextLevelOfCache = L2CacheIndex;
- New += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
-
- L2Cache.CacheId = CacheId++;
- // Add L2 Cache structure
- CopyMem (New, &L2Cache, sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE));
- New += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
-
for (CoreNum = 0; CoreNum < CpuTopo.Cores; CoreNum++) {
UINT32 *PrivateResourcePtr;
UINT32 CoreCpuId;
+ L1DCacheIndex = CoreIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR);
+ L1ICacheIndex = L1DCacheIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
+ L2CacheIndex = L1ICacheIndex + sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
+
if (CpuTopo.Threads == 1) {
CoreCpuId = CpuId;
} else {
@@ -650,6 +634,23 @@ AddPpttTable (
PrivateResourcePtr[1] = L1ICacheIndex;
New += (2 * sizeof (UINT32));
+ L1DCache.CacheId = CacheId++;
+ // Add L1 D Cache structure
+ CopyMem (New, &L1DCache, sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE));
+ ((EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE *)New)->NextLevelOfCache = L2CacheIndex;
+ New += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
+
+ L1ICache.CacheId = CacheId++;
+ // Add L1 I Cache structure
+ CopyMem (New, &L1ICache, sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE));
+ ((EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE *)New)->NextLevelOfCache = L2CacheIndex;
+ New += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
+
+ L2Cache.CacheId = CacheId++;
+ // Add L2 Cache structure
+ CopyMem (New, &L2Cache, sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE));
+ New += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE);
+
if (CpuTopo.Threads == 1) {
CpuId++;
} else {
@@ -665,6 +666,7 @@ AddPpttTable (
}
CoreIndex += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_PROCESSOR) + sizeof (UINT32) * 2;
+ CoreIndex += sizeof (EFI_ACPI_6_5_PPTT_STRUCTURE_CACHE) * 3;
}
ClusterIndex = CoreIndex;
--
2.45.2
Thread overview: 5+ messages
2024-07-02 16:33 [edk2-devel] [PATCH edk2-platforms v2 0/3] SbsaQemu: Align the PPTT tables with QEMU Marcin Juszkiewicz
2024-07-02 16:33 ` [edk2-devel] [PATCH edk2-platforms v2 1/3] Platform/SbsaQemu: get the information of CPU topology via SMC calls Marcin Juszkiewicz
2024-07-02 16:33 ` [edk2-devel] [PATCH edk2-platforms v2 2/3] Silicon/SbsaQemu: align the PPTT tables with QEMU Marcin Juszkiewicz
2024-07-04 11:57 ` Leif Lindholm
2024-07-02 16:33 ` Marcin Juszkiewicz [this message]