From: "PierreGondois" <pierre.gondois@arm.com>
To: Jeff Brasen <jbrasen@nvidia.com>, devel@edk2.groups.io
Cc: Sami.Mujawar@arm.com, Alexei.Fedorov@arm.com,
quic_llindhol@quicinc.com, ardb+tianocore@kernel.org
Subject: Re: [PATCH v2] DynamicTablesPkg: Allow multiple top level physical nodes
Date: Mon, 6 Feb 2023 10:27:36 +0100 [thread overview]
Message-ID: <cbc47834-0e59-8f4e-86cd-032e3b034433@arm.com> (raw)
In-Reply-To: <2fbd84095cc52b908f0a59d98358f36a396c319b.1675447806.git.jbrasen@nvidia.com>
Hello Jeff,
Thanks for the v2. See also the first discussion at:
https://edk2.groups.io/g/devel/topic/96680589#99612
- I think it would be good to extract a function that does all the checks
as there are many possibilities for the flags/parent combinations.
- I think it would also be nice to reset the ProcContainer index for
each new level (i.e. not to share the same incrementing index across
clusters/packages)
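For the first point, a rough standalone sketch of what such an extracted
check could look like (the name IsProcNodeValid, the flag value, and the
type stand-ins are hypothetical, not the actual edk2 definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the edk2 types/flags, for illustration only. */
typedef uint32_t UINT32;
#define PPTT_PACKAGE_PHYSICAL  (1u << 0)

/* Sketch: validate the flags/parent combination of one processor node.
   A top-level node (no parent) must carry the physical-package flag;
   a nested container must not claim it. */
static int
IsProcNodeValid (
  UINT32  Flags,
  int     IsTopLevel
  )
{
  if (IsTopLevel) {
    return (Flags & PPTT_PACKAGE_PHYSICAL) != 0;
  }
  return (Flags & PPTT_PACKAGE_PHYSICAL) == 0;
}
```

Grouping all the flags/parent combinations in one place like this would
make it easier to report each invalid configuration explicitly.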
I created a branch based on your work at:
https://github.com/pierregondois/edk2/tree/pg/top_level_pnode_Wip
Regards,
Pierre
On 2/3/23 19:10, Jeff Brasen wrote:
> In SSDT CPU topology generator allow for multiple top level physical
> nodes as would be seen with a multi-socket system. This will create a
> top level processor container for all systems.
>
> Signed-off-by: Jeff Brasen <jbrasen@nvidia.com>
> ---
> .../SsdtCpuTopologyGenerator.c | 43 ++++++-------------
> 1 file changed, 12 insertions(+), 31 deletions(-)
>
> diff --git a/DynamicTablesPkg/Library/Acpi/Arm/AcpiSsdtCpuTopologyLibArm/SsdtCpuTopologyGenerator.c b/DynamicTablesPkg/Library/Acpi/Arm/AcpiSsdtCpuTopologyLibArm/SsdtCpuTopologyGenerator.c
> index c24da8ec71..46b757e0b2 100644
> --- a/DynamicTablesPkg/Library/Acpi/Arm/AcpiSsdtCpuTopologyLibArm/SsdtCpuTopologyGenerator.c
> +++ b/DynamicTablesPkg/Library/Acpi/Arm/AcpiSsdtCpuTopologyLibArm/SsdtCpuTopologyGenerator.c
> @@ -814,7 +814,8 @@ CreateAmlProcessorContainer (
> Protocol Interface.
> @param [in] NodeToken Token of the CM_ARM_PROC_HIERARCHY_INFO
> currently handled.
> - Cannot be CM_NULL_TOKEN.
> + CM_NULL_TOKEN if top level container
> + should be created.
> @param [in] ParentNode Parent node to attach the created
> node to.
> @param [in,out] ProcContainerIndex Pointer to the current processor container
> @@ -841,12 +842,12 @@ CreateAmlCpuTopologyTree (
> AML_OBJECT_NODE_HANDLE ProcContainerNode;
> UINT32 Uid;
> UINT16 Name;
> + UINT32 NodeFlags;
>
> ASSERT (Generator != NULL);
> ASSERT (Generator->ProcNodeList != NULL);
> ASSERT (Generator->ProcNodeCount != 0);
> ASSERT (CfgMgrProtocol != NULL);
> - ASSERT (NodeToken != CM_NULL_TOKEN);
> ASSERT (ParentNode != NULL);
> ASSERT (ProcContainerIndex != NULL);
>
> @@ -893,8 +894,14 @@ CreateAmlCpuTopologyTree (
> } else {
> // If this is not a Cpu, then this is a processor container.
>
> + NodeFlags = Generator->ProcNodeList[Index].Flags;
> + // Allow physical property for top level nodes
> + if (NodeToken == CM_NULL_TOKEN) {
> + NodeFlags &= ~EFI_ACPI_6_3_PPTT_PACKAGE_PHYSICAL;
> + }
> +
I think that if (NodeToken == CM_NULL_TOKEN) and the node does not have the
Physical Package flag set, no error will be triggered even though this is not
a valid configuration.
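A minimal sketch of the kind of guard I mean, with stand-in values and a
plain error return instead of the generator's DEBUG/EFI_STATUS handling
(names and values are illustrative, not the edk2 definitions):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t UINT32;
#define PPTT_PACKAGE_PHYSICAL  (1u << 0)  /* stand-in flag value */

/* Sketch: a top-level node (the NodeToken == CM_NULL_TOKEN case) that
   lacks the physical-package flag is an invalid configuration and
   should fail loudly instead of being silently masked out.
   Returns 0 on success, -1 on error. */
static int
ValidateTopLevelFlags (
  UINT32  NodeFlags
  )
{
  if ((NodeFlags & PPTT_PACKAGE_PHYSICAL) == 0) {
    /* In the generator this would log an error and return an
       EFI_STATUS; here we just signal failure. */
    return -1;
  }
  return 0;
}
```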
> // Acpi processor Id for clusters is not handled.
> - if ((Generator->ProcNodeList[Index].Flags & PPTT_PROCESSOR_MASK) !=
> + if ((NodeFlags & PPTT_PROCESSOR_MASK) !=
> PPTT_CLUSTER_PROCESSOR_MASK)
> {
> DEBUG ((
> @@ -974,8 +981,6 @@ CreateTopologyFromProcHierarchy (
> )
> {
> EFI_STATUS Status;
> - UINT32 Index;
> - UINT32 TopLevelProcNodeIndex;
> UINT32 ProcContainerIndex;
>
> ASSERT (Generator != NULL);
> @@ -984,8 +989,7 @@ CreateTopologyFromProcHierarchy (
> ASSERT (CfgMgrProtocol != NULL);
> ASSERT (ScopeNode != NULL);
>
> - TopLevelProcNodeIndex = MAX_UINT32;
> - ProcContainerIndex = 0;
> + ProcContainerIndex = 0;
>
> Status = TokenTableInitialize (Generator, Generator->ProcNodeCount);
> if (EFI_ERROR (Status)) {
> @@ -993,33 +997,10 @@ CreateTopologyFromProcHierarchy (
> return Status;
> }
>
> - // It is assumed that there is one unique CM_ARM_PROC_HIERARCHY_INFO
> - // structure with no ParentToken and the EFI_ACPI_6_3_PPTT_PACKAGE_PHYSICAL
> - // flag set. All other CM_ARM_PROC_HIERARCHY_INFO are non-physical and
> - // have a ParentToken.
> - for (Index = 0; Index < Generator->ProcNodeCount; Index++) {
> - if ((Generator->ProcNodeList[Index].ParentToken == CM_NULL_TOKEN) &&
> - (Generator->ProcNodeList[Index].Flags &
> - EFI_ACPI_6_3_PPTT_PACKAGE_PHYSICAL))
> - {
> - if (TopLevelProcNodeIndex != MAX_UINT32) {
> - DEBUG ((
> - DEBUG_ERROR,
> - "ERROR: SSDT-CPU-TOPOLOGY: Top level CM_ARM_PROC_HIERARCHY_INFO "
> - "must be unique\n"
> - ));
> - ASSERT (0);
> - goto exit_handler;
> - }
> -
> - TopLevelProcNodeIndex = Index;
> - }
> - } // for
> -
> Status = CreateAmlCpuTopologyTree (
> Generator,
> CfgMgrProtocol,
> - Generator->ProcNodeList[TopLevelProcNodeIndex].Token,
> + CM_NULL_TOKEN,
> ScopeNode,
> &ProcContainerIndex
> );
Thread overview: 3+ messages
2023-02-03 18:10 [PATCH v2] DynamicTablesPkg: Allow multiple top level physical nodes Jeff Brasen
2023-02-06 9:27 ` PierreGondois [this message]
2023-02-13 16:10 ` Jeff Brasen