From: "Ni, Ray" <ray.ni@intel.com>
To: "Wu, Jiaxin" <jiaxin.wu@intel.com>; devel@edk2.groups.io
Cc: "Zeng, Star" <star.zeng@intel.com>; Gerd Hoffmann <kraxel@redhat.com>; "Kumar, Rahul R" <rahul.r.kumar@intel.com>
Subject: Re: [edk2-devel] [PATCH v1 02/13] UefiCpuPkg/SmmRelocationLib: Add SmmRelocationLib library instance
Date: Thu, 11 Apr 2024 03:18:55 +0000
In-Reply-To: <20240410135724.15344-3-jiaxin.wu@intel.com>
References: <20240410135724.15344-1-jiaxin.wu@intel.com> <20240410135724.15344-3-jiaxin.wu@intel.com>

Not sure if "--find-copies-harder" helps to detect that the new files are
copies of existing files. Can you try it in the next version, if review
comments request a new version of the patch?

Thanks,
Ray

________________________________
From: Wu, Jiaxin <jiaxin.wu@intel.com>
Sent: Wednesday, April 10, 2024 21:57
To: devel@edk2.groups.io
Cc: Ni, Ray <ray.ni@intel.com>; Zeng, Star <star.zeng@intel.com>; Gerd Hoffmann <kraxel@redhat.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>
Subject: [PATCH v1 02/13] UefiCpuPkg/SmmRelocationLib: Add SmmRelocationLib library instance

This patch separates the SMBASE relocation logic from the PiSmmCpuDxeSmm
driver and moves it into the new SmmRelocationInit() interface.

Platforms shall consume this interface to perform the SMBASE relocation
if SMM support is needed.
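A sketch of the copy-detection suggestion above, in case the series is
regenerated for a next version: passing git's copy-detection options when
recreating the patch could make files copied from existing sources appear
as "copy from"/"copy to" headers instead of full additions, if git's
heuristics find a close enough source. The command below is illustrative
only; "-1 HEAD" is a placeholder for the actual commit in the series.

    # Recreate the latest commit as a patch with copy detection enabled.
    # -C turns on copy detection; --find-copies-harder also considers
    # unmodified files as possible copy sources (slower, but thorough).
    git format-patch -1 -C --find-copies-harder HEAD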
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
---
 .../Library/SmmRelocationLib/Ia32/Semaphore.c      |  43 ++
 .../Library/SmmRelocationLib/Ia32/SmmInit.nasm     | 157 +++++
 .../SmmRelocationLib/InternalSmmRelocationLib.h    | 141 +++++
 .../Library/SmmRelocationLib/SmmRelocationLib.c    | 659 ++++++++++++++++++++
 .../Library/SmmRelocationLib/SmmRelocationLib.inf  |  61 ++
 .../SmmRelocationLib/SmramSaveStateConfig.c        |  91 +++
 .../Library/SmmRelocationLib/X64/Semaphore.c       |  70 +++
 .../Library/SmmRelocationLib/X64/SmmInit.nasm      | 207 +++++++
 8 files changed, 1429 insertions(+)
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/Ia32/Semaphore.c
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/Ia32/SmmInit.nasm
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/InternalSmmRelocationLib.h
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.c
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.inf
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/SmramSaveStateConfig.c
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/X64/Semaphore.c
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/X64/SmmInit.nasm

diff --git a/UefiCpuPkg/Library/SmmRelocationLib/Ia32/Semaphore.c b/UefiCpuPkg/Library/SmmRelocationLib/Ia32/Semaphore.c
new file mode 100644
index 0000000000..ace3221cfc
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/Ia32/Semaphore.c
@@ -0,0 +1,43 @@
+/** @file
+  Semaphore mechanism to indicate to the BSP that an AP has exited SMM
+  after SMBASE relocation.
+
+  Copyright (c) 2024, Intel Corporation. All rights reserved.<BR>
+ SPDX-License-Identifier: BSD-2-Clause-Patent + +**/ + +#include "InternalSmmRelocationLib.h" + +UINTN mSmmRelocationOriginalAddress; +volatile BOOLEAN *mRebasedFlag; + +/** + Hook return address of SMM Save State so that semaphore code + can be executed immediately after AP exits SMM to indicate to + the BSP that an AP has exited SMM after SMBASE relocation. + + @param[in] CpuIndex The processor index. + @param[in] RebasedFlag A pointer to a flag that is set to TRUE + immediately after AP exits SMM. + +**/ +VOID +SemaphoreHook ( + IN UINTN CpuIndex, + IN volatile BOOLEAN *RebasedFlag + ) +{ + SMRAM_SAVE_STATE_MAP *CpuState; + + mRebasedFlag =3D RebasedFlag; + + CpuState =3D (SMRAM_SAVE_STATE_MAP *)(UINTN)(SMM_DEFAULT_SMBASE + SMRAM_= SAVE_STATE_MAP_OFFSET); + + mSmmRelocationOriginalAddress =3D (UINTN)HookReturnFromSmm ( + CpuIndex, + CpuState, + (UINT64)(UINTN)&SmmRelocationSe= maphoreComplete, + (UINT64)(UINTN)&SmmRelocationSe= maphoreComplete + ); +} diff --git a/UefiCpuPkg/Library/SmmRelocationLib/Ia32/SmmInit.nasm b/UefiCp= uPkg/Library/SmmRelocationLib/Ia32/SmmInit.nasm new file mode 100644 index 0000000000..cb8b030693 --- /dev/null +++ b/UefiCpuPkg/Library/SmmRelocationLib/Ia32/SmmInit.nasm @@ -0,0 +1,157 @@ +;-------------------------------------------------------------------------= ----- ; +; Copyright (c) 2024, Intel Corporation. All rights reserved.
+; SPDX-License-Identifier: BSD-2-Clause-Patent +; +; Module Name: +; +; SmmInit.nasm +; +; Abstract: +; +; Functions for relocating SMBASE's for all processors +; +;-------------------------------------------------------------------------= ------ + +%include "StuffRsbNasm.inc" + +global ASM_PFX(gcSmiIdtr) +global ASM_PFX(gcSmiGdtr) + +extern ASM_PFX(SmmInitHandler) +extern ASM_PFX(mRebasedFlag) +extern ASM_PFX(mSmmRelocationOriginalAddress) + +global ASM_PFX(gPatchSmmCr3) +global ASM_PFX(gPatchSmmCr4) +global ASM_PFX(gPatchSmmCr0) +global ASM_PFX(gPatchSmmInitStack) +global ASM_PFX(gcSmmInitSize) +global ASM_PFX(gcSmmInitTemplate) + +%define PROTECT_MODE_CS 0x8 +%define PROTECT_MODE_DS 0x20 + + SECTION .data + +NullSeg: DQ 0 ; reserved by architecture +CodeSeg32: + DW -1 ; LimitLow + DW 0 ; BaseLow + DB 0 ; BaseMid + DB 0x9b + DB 0xcf ; LimitHigh + DB 0 ; BaseHigh +ProtModeCodeSeg32: + DW -1 ; LimitLow + DW 0 ; BaseLow + DB 0 ; BaseMid + DB 0x9b + DB 0xcf ; LimitHigh + DB 0 ; BaseHigh +ProtModeSsSeg32: + DW -1 ; LimitLow + DW 0 ; BaseLow + DB 0 ; BaseMid + DB 0x93 + DB 0xcf ; LimitHigh + DB 0 ; BaseHigh +DataSeg32: + DW -1 ; LimitLow + DW 0 ; BaseLow + DB 0 ; BaseMid + DB 0x93 + DB 0xcf ; LimitHigh + DB 0 ; BaseHigh +CodeSeg16: + DW -1 + DW 0 + DB 0 + DB 0x9b + DB 0x8f + DB 0 +DataSeg16: + DW -1 + DW 0 + DB 0 + DB 0x93 + DB 0x8f + DB 0 +CodeSeg64: + DW -1 ; LimitLow + DW 0 ; BaseLow + DB 0 ; BaseMid + DB 0x9b + DB 0xaf ; LimitHigh + DB 0 ; BaseHigh +GDT_SIZE equ $ - NullSeg + +ASM_PFX(gcSmiGdtr): + DW GDT_SIZE - 1 + DD NullSeg + +ASM_PFX(gcSmiIdtr): + DW 0 + DD 0 + + + SECTION .text + +global ASM_PFX(SmmStartup) + +BITS 16 +ASM_PFX(SmmStartup): + ;mov eax, 0x80000001 ; read capability + ;cpuid + ;mov ebx, edx ; rdmsr will change edx. keep it = in ebx. + ;and ebx, BIT20 ; extract NX capability bit + ;shr ebx, 9 ; shift bit to IA32_EFER.NXE[BIT1= 1] position + mov eax, strict dword 0 ; source operand will be patched +ASM_PFX(gPatchSmmCr3): + mov cr3, eax +o32 lgdt [cs:ebp + (ASM_PFX(gcSmiGdtr) - ASM_PFX(SmmStartup))] + mov eax, strict dword 0 ; source operand will be patched +ASM_PFX(gPatchSmmCr4): + mov cr4, eax + ;mov ecx, 0xc0000080 ; IA32_EFER MSR + ;rdmsr + ;or eax, ebx ; set NXE bit if NX is available + ;wrmsr + mov eax, strict dword 0 ; source operand will be patched +ASM_PFX(gPatchSmmCr0): + mov di, PROTECT_MODE_DS + mov cr0, eax + jmp PROTECT_MODE_CS : dword @32bit + +BITS 32 +@32bit: + mov ds, edi + mov es, edi + mov fs, edi + mov gs, edi + mov ss, edi + mov esp, strict dword 0 ; source operand will be patched +ASM_PFX(gPatchSmmInitStack): + call ASM_PFX(SmmInitHandler) + StuffRsb32 + rsm + +BITS 16 +ASM_PFX(gcSmmInitTemplate): + mov ebp, ASM_PFX(SmmStartup) + sub ebp, 0x30000 + jmp ebp + +ASM_PFX(gcSmmInitSize): DW $ - ASM_PFX(gcSmmInitTemplate) + +BITS 32 +global ASM_PFX(SmmRelocationSemaphoreComplete) +ASM_PFX(SmmRelocationSemaphoreComplete): + push eax + mov eax, [ASM_PFX(mRebasedFlag)] + mov byte [eax], 1 + pop eax + jmp [ASM_PFX(mSmmRelocationOriginalAddress)] + +global ASM_PFX(SmmInitFixupAddress) +ASM_PFX(SmmInitFixupAddress): + ret diff --git a/UefiCpuPkg/Library/SmmRelocationLib/InternalSmmRelocationLib.h= b/UefiCpuPkg/Library/SmmRelocationLib/InternalSmmRelocationLib.h new file mode 100644 index 0000000000..c8647fbfe7 --- /dev/null +++ b/UefiCpuPkg/Library/SmmRelocationLib/InternalSmmRelocationLib.h @@ -0,0 +1,141 @@ +/** @file + SMM Relocation Lib for each processor. 
+ + This Lib produces the SMM_BASE_HOB in HOB database which tells + the PiSmmCpuDxeSmm driver (runs at a later phase) about the new + SMBASE for each processor. PiSmmCpuDxeSmm driver installs the + SMI handler at the SMM_BASE_HOB.SmBase[Index]+0x8000 for processor + Index. + + Copyright (c) 2024, Intel Corporation. All rights reserved.
+ SPDX-License-Identifier: BSD-2-Clause-Patent + +**/ + +#ifndef INTERNAL_SMM_RELOCATION_LIB_H_ +#define INTERNAL_SMM_RELOCATION_LIB_H_ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +extern UINT64 *mSmBaseForAllCpus; +extern UINT8 mSmmSaveStateRegisterLma; + +extern IA32_DESCRIPTOR gcSmiGdtr; +extern IA32_DESCRIPTOR gcSmiIdtr; +extern CONST UINT16 gcSmmInitSize; +extern CONST UINT8 gcSmmInitTemplate[]; + +X86_ASSEMBLY_PATCH_LABEL gPatchSmmCr0; +X86_ASSEMBLY_PATCH_LABEL gPatchSmmCr3; +X86_ASSEMBLY_PATCH_LABEL gPatchSmmCr4; +X86_ASSEMBLY_PATCH_LABEL gPatchSmmInitStack; + +// +// The size 0x20 must be bigger than +// the size of template code of SmmInit. Currently, +// the size of SmmInit requires the 0x16 Bytes buffer +// at least. +// +#define BACK_BUF_SIZE 0x20 + +#define CR4_CET_ENABLE BIT23 + +// +// EFER register LMA bit +// +#define LMA BIT10 + +/** + This function configures the SmBase on the currently executing CPU. + + @param[in] CpuIndex The index of the CPU. + @param[in,out] CpuState Pointer to SMRAM Save State Map for = the + currently executing CPU. On out, SmB= ase is + updated to the new value. + +**/ +VOID +EFIAPI +ConfigureSmBase ( + IN UINTN CpuIndex, + IN OUT SMRAM_SAVE_STATE_MAP *CpuState + ); + +/** + Semaphore operation for all processor relocate SMMBase. +**/ +VOID +EFIAPI +SmmRelocationSemaphoreComplete ( + VOID + ); + +/** + Hook the code executed immediately after an RSM instruction on the curre= ntly + executing CPU. The mode of code executed immediately after RSM must be + detected, and the appropriate hook must be selected. Always clear the a= uto + HALT restart flag if it is set. + + @param[in] CpuIndex The processor index for the curr= ently + executing CPU. + @param[in,out] CpuState Pointer to SMRAM Save State Map = for the + currently executing CPU. + @param[in] NewInstructionPointer32 Instruction pointer to use if re= suming to + 32-bit mode from 64-bit SMM. + @param[in] NewInstructionPointer Instruction pointer to use if re= suming to + same mode as SMM. + + @retval The value of the original instruction pointer before it was hook= ed. + +**/ +UINT64 +EFIAPI +HookReturnFromSmm ( + IN UINTN CpuIndex, + IN OUT SMRAM_SAVE_STATE_MAP *CpuState, + IN UINT64 NewInstructionPointer32, + IN UINT64 NewInstructionPointer + ); + +/** + Hook return address of SMM Save State so that semaphore code + can be executed immediately after AP exits SMM to indicate to + the BSP that an AP has exited SMM after SMBASE relocation. + + @param[in] CpuIndex The processor index. + @param[in] RebasedFlag A pointer to a flag that is set to TRUE + immediately after AP exits SMM. + +**/ +VOID +SemaphoreHook ( + IN UINTN CpuIndex, + IN volatile BOOLEAN *RebasedFlag + ); + +/** + This function fixes up the address of the global variable or function + referred in SmmInit assembly files to be the absolute address. +**/ +VOID +EFIAPI +SmmInitFixupAddress ( + ); + +#endif diff --git a/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.c b/UefiC= puPkg/Library/SmmRelocationLib/SmmRelocationLib.c new file mode 100644 index 0000000000..38bad24e7a --- /dev/null +++ b/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.c @@ -0,0 +1,659 @@ +/** @file + SMM Relocation Lib for each processor. + + This Lib produces the SMM_BASE_HOB in HOB database which tells + the PiSmmCpuDxeSmm driver (runs at a later phase) about the new + SMBASE for each processor. 
PiSmmCpuDxeSmm driver installs the + SMI handler at the SMM_BASE_HOB.SmBase[Index]+0x8000 for processor + Index. + + Copyright (c) 2024, Intel Corporation. All rights reserved.
+ SPDX-License-Identifier: BSD-2-Clause-Patent + +**/ +#include "InternalSmmRelocationLib.h" + +UINTN mMaxNumberOfCpus =3D 1; +UINTN mNumberOfCpus =3D 1; +UINT64 *mSmBaseForAllCpus =3D NULL; + +// +// The mode of the CPU at the time an SMI occurs +// +UINT8 mSmmSaveStateRegisterLma; + +// +// Record all Processors Info +// +EFI_PROCESSOR_INFORMATION *mProcessorInfo =3D NULL; + +// +// SmBase Rebased or not +// +volatile BOOLEAN *mRebased; + +/** + C function for SMI handler. To change all processor's SMMBase Register. + +**/ +VOID +EFIAPI +SmmInitHandler ( + VOID + ) +{ + UINT32 ApicId; + UINTN Index; + + SMRAM_SAVE_STATE_MAP *CpuState; + + // + // Update SMM IDT entries' code segment and load IDT + // + AsmWriteIdtr (&gcSmiIdtr); + ApicId =3D GetApicId (); + + ASSERT (mNumberOfCpus <=3D mMaxNumberOfCpus); + + for (Index =3D 0; Index < mNumberOfCpus; Index++) { + if (ApicId =3D=3D (UINT32)mProcessorInfo[Index].ProcessorId) { + // + // Configure SmBase. + // + CpuState =3D (SMRAM_SAVE_STATE_MAP *)(UINTN)(SMM_DEFAULT_SMBASE + SM= RAM_SAVE_STATE_MAP_OFFSET); + ConfigureSmBase (Index, CpuState); + + // + // Hook return after RSM to set SMM re-based flag + // SMM re-based flag can't be set before RSM, because SMM save state= context might be override + // by next AP flow before it take effect. + // + SemaphoreHook (Index, &mRebased[Index]); + return; + } + } + + ASSERT (FALSE); +} + +/** + This routine will split SmramReserve HOB to reserve SmmRelocationSize fo= r Smm relocated memory. + + @param[in] SmmRelocationSize SmmRelocationSize for all processor= s. + @param[in,out] SmmRelocationStart Return the start address of Smm rel= ocated memory in SMRAM. + + @retval EFI_SUCCESS The gEfiSmmSmramMemoryGuid is split succes= sfully. + @retval EFI_DEVICE_ERROR Failed to build new HOB for gEfiSmmSmramMe= moryGuid. + @retval EFI_NOT_FOUND The gEfiSmmSmramMemoryGuid is not found. + +**/ +EFI_STATUS +SplitSmramHobForSmmRelocation ( + IN UINT64 SmmRelocationSize, + IN OUT EFI_PHYSICAL_ADDRESS *SmmRelocationStart + ) +{ + EFI_HOB_GUID_TYPE *GuidHob; + EFI_SMRAM_HOB_DESCRIPTOR_BLOCK *DescriptorBlock; + EFI_SMRAM_HOB_DESCRIPTOR_BLOCK *NewDescriptorBlock; + UINTN BufferSize; + UINTN SmramRanges; + + NewDescriptorBlock =3D NULL; + + // + // Retrieve the GUID HOB data that contains the set of SMRAM descriptors + // + GuidHob =3D GetFirstGuidHob (&gEfiSmmSmramMemoryGuid); + if (GuidHob =3D=3D NULL) { + return EFI_NOT_FOUND; + } + + DescriptorBlock =3D (EFI_SMRAM_HOB_DESCRIPTOR_BLOCK *)GET_GUID_HOB_DATA = (GuidHob); + + // + // Allocate one extra EFI_SMRAM_DESCRIPTOR to describe SMRAM memory that= contains a pointer + // to the Smm relocated memory. 
+ // + SmramRanges =3D DescriptorBlock->NumberOfSmmReservedRegions; + BufferSize =3D sizeof (EFI_SMRAM_HOB_DESCRIPTOR_BLOCK) + (SmramRanges *= sizeof (EFI_SMRAM_DESCRIPTOR)); + + NewDescriptorBlock =3D (EFI_SMRAM_HOB_DESCRIPTOR_BLOCK *)BuildGuidHob ( + &gEfiSmmSmramMe= moryGuid, + BufferSize + ); + ASSERT (NewDescriptorBlock !=3D NULL); + if (NewDescriptorBlock =3D=3D NULL) { + return EFI_DEVICE_ERROR; + } + + // + // Copy old EFI_SMRAM_HOB_DESCRIPTOR_BLOCK to new allocated region + // + CopyMem ((VOID *)NewDescriptorBlock, DescriptorBlock, BufferSize - sizeo= f (EFI_SMRAM_DESCRIPTOR)); + + // + // Increase the number of SMRAM descriptors by 1 to make room for the AL= LOCATED descriptor of size EFI_PAGE_SIZE + // + NewDescriptorBlock->NumberOfSmmReservedRegions =3D (UINT32)(SmramRanges = + 1); + + ASSERT (SmramRanges >=3D 1); + // + // Copy last entry to the end - we assume TSEG is last entry. + // + CopyMem (&NewDescriptorBlock->Descriptor[SmramRanges], &NewDescriptorBlo= ck->Descriptor[SmramRanges - 1], sizeof (EFI_SMRAM_DESCRIPTOR)); + + // + // Update the entry in the array with a size of SmmRelocationSize and pu= t into the ALLOCATED state + // + NewDescriptorBlock->Descriptor[SmramRanges - 1].PhysicalSize =3D SmmRelo= cationSize; + NewDescriptorBlock->Descriptor[SmramRanges - 1].RegionState |=3D EFI_ALL= OCATED; + + // + // Return the start address of Smm relocated memory in SMRAM. + // + if (SmmRelocationStart !=3D NULL) { + *SmmRelocationStart =3D NewDescriptorBlock->Descriptor[SmramRanges - 1= ].CpuStart; + } + + // + // Reduce the size of the last SMRAM descriptor by SmmRelocationSize + // + NewDescriptorBlock->Descriptor[SmramRanges].PhysicalStart +=3D SmmReloca= tionSize; + NewDescriptorBlock->Descriptor[SmramRanges].CpuStart +=3D SmmReloca= tionSize; + NewDescriptorBlock->Descriptor[SmramRanges].PhysicalSize -=3D SmmReloca= tionSize; + + // + // Last step, we can scrub old one + // + ZeroMem (&GuidHob->Name, sizeof (GuidHob->Name)); + + return EFI_SUCCESS; +} + +/** + This function will create SmBase for all CPUs. + + @param[in] SmBaseForAllCpus Pointer to SmBase for all CPUs. + + @retval EFI_SUCCESS Create SmBase for all CPUs successfully. + @retval Others Failed to create SmBase for all CPUs. + +**/ +EFI_STATUS +CreateSmmBaseHob ( + IN UINT64 *SmBaseForAllCpus + ) +{ + UINTN Index; + SMM_BASE_HOB_DATA *SmmBaseHobData; + UINT32 CpuCount; + UINT32 NumberOfProcessorsInHob; + UINT32 MaxCapOfProcessorsInHob; + UINT32 HobCount; + + SmmBaseHobData =3D NULL; + CpuCount =3D 0; + NumberOfProcessorsInHob =3D 0; + MaxCapOfProcessorsInHob =3D 0; + HobCount =3D 0; + + // + // Count the HOB instance maximum capacity of CPU (MaxCapOfProcessorsInH= ob) since the max HobLength is 0xFFF8. + // + MaxCapOfProcessorsInHob =3D (0xFFF8 - sizeof (EFI_HOB_GUID_TYPE) - sizeo= f (SMM_BASE_HOB_DATA)) / sizeof (UINT64) + 1; + DEBUG ((DEBUG_INFO, "CreateSmmBaseHob - MaxCapOfProcessorsInHob: %03x\n"= , MaxCapOfProcessorsInHob)); + + // + // Create Guided SMM Base HOB Instances. 
+ // + while (CpuCount !=3D mMaxNumberOfCpus) { + NumberOfProcessorsInHob =3D MIN ((UINT32)mMaxNumberOfCpus - CpuCount, = MaxCapOfProcessorsInHob); + + SmmBaseHobData =3D BuildGuidHob ( + &gSmmBaseHobGuid, + sizeof (SMM_BASE_HOB_DATA) + sizeof (UINT64) * Numb= erOfProcessorsInHob + ); + if (SmmBaseHobData =3D=3D NULL) { + return EFI_OUT_OF_RESOURCES; + } + + SmmBaseHobData->ProcessorIndex =3D CpuCount; + SmmBaseHobData->NumberOfProcessors =3D NumberOfProcessorsInHob; + + DEBUG ((DEBUG_INFO, "CreateSmmBaseHob - SmmBaseHobData[%d]->ProcessorI= ndex: %03x\n", HobCount, SmmBaseHobData->ProcessorIndex)); + DEBUG ((DEBUG_INFO, "CreateSmmBaseHob - SmmBaseHobData[%d]->NumberOfPr= ocessors: %03x\n", HobCount, SmmBaseHobData->NumberOfProcessors)); + for (Index =3D 0; Index < SmmBaseHobData->NumberOfProcessors; Index++)= { + // + // Calculate the new SMBASE address + // + SmmBaseHobData->SmBase[Index] =3D SmBaseForAllCpus[Index + CpuCount]= ; + DEBUG ((DEBUG_INFO, "CreateSmmBaseHob - SmmBaseHobData[%d]->SmBase[%= 03x]: %08x\n", HobCount, Index, SmmBaseHobData->SmBase[Index])); + } + + CpuCount +=3D NumberOfProcessorsInHob; + HobCount++; + SmmBaseHobData =3D NULL; + } + + return EFI_SUCCESS; +} + +/** + Relocate SmmBases for each processor. + Execute on first boot and all S3 resumes + +**/ +VOID +SmmRelocateBases ( + VOID + ) +{ + UINT8 BakBuf[BACK_BUF_SIZE]; + SMRAM_SAVE_STATE_MAP BakBuf2; + SMRAM_SAVE_STATE_MAP *CpuStatePtr; + UINT8 *U8Ptr; + UINTN Index; + UINTN BspIndex; + UINT32 BspApicId; + + // + // Make sure the reserved size is large enough for procedure SmmInitTemp= late. + // + ASSERT (sizeof (BakBuf) >=3D gcSmmInitSize); + + // + // Patch ASM code template with current CR0, CR3, and CR4 values + // + PatchInstructionX86 (gPatchSmmCr0, AsmReadCr0 (), 4); + PatchInstructionX86 (gPatchSmmCr3, AsmReadCr3 (), 4); + PatchInstructionX86 (gPatchSmmCr4, AsmReadCr4 () & (~CR4_CET_ENABLE), 4)= ; + + U8Ptr =3D (UINT8 *)(UINTN)(SMM_DEFAULT_SMBASE + SMM_HANDLER_OFFSET= ); + CpuStatePtr =3D (SMRAM_SAVE_STATE_MAP *)(UINTN)(SMM_DEFAULT_SMBASE + SMR= AM_SAVE_STATE_MAP_OFFSET); + + // + // Backup original contents at address 0x38000 + // + CopyMem (BakBuf, U8Ptr, sizeof (BakBuf)); + CopyMem (&BakBuf2, CpuStatePtr, sizeof (BakBuf2)); + + // + // Load image for relocation + // + CopyMem (U8Ptr, gcSmmInitTemplate, gcSmmInitSize); + + // + // Retrieve the local APIC ID of current processor + // + BspApicId =3D GetApicId (); + + // + // Relocate SM bases for all APs + // This is APs' 1st SMI - rebase will be done here, and APs' default SMI= handler will be overridden by gcSmmInitTemplate + // + BspIndex =3D (UINTN)-1; + for (Index =3D 0; Index < mNumberOfCpus; Index++) { + mRebased[Index] =3D FALSE; + if (BspApicId !=3D (UINT32)mProcessorInfo[Index].ProcessorId) { + SendSmiIpi ((UINT32)mProcessorInfo[Index].ProcessorId); + // + // Wait for this AP to finish its 1st SMI + // + while (!mRebased[Index]) { + } + } else { + // + // BSP will be Relocated later + // + BspIndex =3D Index; + } + } + + // + // Relocate BSP's SMM base + // + ASSERT (BspIndex !=3D (UINTN)-1); + SendSmiIpi (BspApicId); + + // + // Wait for the BSP to finish its 1st SMI + // + while (!mRebased[BspIndex]) { + } + + // + // Restore contents at address 0x38000 + // + CopyMem (CpuStatePtr, &BakBuf2, sizeof (BakBuf2)); + CopyMem (U8Ptr, BakBuf, sizeof (BakBuf)); +} + +/** + This function will initialize SmBase for all CPUs. + + @param[in,out] SmBaseForAllCpus Pointer to SmBase for all CPUs. 
+ + @retval EFI_SUCCESS Initialize SmBase for all CPUs successfull= y. + @retval Others Failed to initialize SmBase for all CPUs. + +**/ +EFI_STATUS +InitSmBaseForAllCpus ( + IN OUT UINT64 **SmBaseForAllCpus + ) +{ + EFI_STATUS Status; + UINTN TileSize; + UINT64 SmmRelocationSize; + EFI_PHYSICAL_ADDRESS SmmRelocationStart; + UINTN Index; + + SmmRelocationStart =3D 0; + + ASSERT (SmBaseForAllCpus !=3D NULL); + + // + // Calculate SmmRelocationSize for all of the tiles. + // + // The CPU save state and code for the SMI entry point are tiled within = an SMRAM + // allocated buffer. The minimum size of this buffer for a uniprocessor = system + // is 32 KB, because the entry point is SMBASE + 32KB, and CPU save stat= e area + // just below SMBASE + 64KB. If more than one CPU is present in the plat= form, + // then the SMI entry point and the CPU save state areas can be tiles to= minimize + // the total amount SMRAM required for all the CPUs. The tile size can b= e computed + // by adding the CPU save state size, any extra CPU specific context, an= d + // the size of code that must be placed at the SMI entry point to transf= er + // control to a C function in the native SMM execution mode. This size i= s + // rounded up to the nearest power of 2 to give the tile size for a each= CPU. + // The total amount of memory required is the maximum number of CPUs tha= t + // platform supports times the tile size. + // + TileSize =3D SIZE_8KB; + SmmRelocationSize =3D EFI_PAGES_TO_SIZE (EFI_SIZE_TO_PAGES (SIZE_32KB + = TileSize * (mMaxNumberOfCpus - 1))); + + // + // Split SmramReserve HOB to reserve SmmRelocationSize for Smm relocated= memory + // + Status =3D SplitSmramHobForSmmRelocation ( + SmmRelocationSize, + &SmmRelocationStart + ); + if (EFI_ERROR (Status)) { + return Status; + } + + ASSERT (SmmRelocationStart !=3D 0); + DEBUG ((DEBUG_INFO, "InitSmBaseForAllCpus - SmmRelocationSize: 0x%08x\n"= , SmmRelocationSize)); + DEBUG ((DEBUG_INFO, "InitSmBaseForAllCpus - SmmRelocationStart: 0x%08x\n= ", SmmRelocationStart)); + + // + // Init SmBaseForAllCpus + // + *SmBaseForAllCpus =3D (UINT64 *)AllocatePages (EFI_SIZE_TO_PAGES (sizeof= (UINT64) * mMaxNumberOfCpus)); + if (*SmBaseForAllCpus =3D=3D NULL) { + return EFI_OUT_OF_RESOURCES; + } + + for (Index =3D 0; Index < mMaxNumberOfCpus; Index++) { + // + // Return each SmBase in SmBaseForAllCpus + // + (*SmBaseForAllCpus)[Index] =3D (UINTN)(SmmRelocationStart)+ Index * Ti= leSize - SMM_HANDLER_OFFSET; + DEBUG ((DEBUG_INFO, "InitSmBaseForAllCpus - SmBase For CPU[%03x]: %08x= \n", Index, (*SmBaseForAllCpus)[Index])); + } + + return EFI_SUCCESS; +} + +/** + Initialize IDT to setup exception handlers in SMM. + +**/ +VOID +InitSmmIdt ( + VOID + ) +{ + EFI_STATUS Status; + BOOLEAN InterruptState; + IA32_DESCRIPTOR PeiIdtr; + CONST EFI_PEI_SERVICES **PeiServices; + + // + // There are 32 (not 255) entries in it since only processor + // generated exceptions will be handled. + // + gcSmiIdtr.Limit =3D (sizeof (IA32_IDT_GATE_DESCRIPTOR) * 32) - 1; + + // + // Allocate for IDT. + // sizeof (UINTN) is for the PEI Services Table pointer. 
+ // + gcSmiIdtr.Base =3D (UINTN)AllocateZeroPool (gcSmiIdtr.Limit + 1 + sizeof= (UINTN)); + ASSERT (gcSmiIdtr.Base !=3D 0); + gcSmiIdtr.Base +=3D sizeof (UINTN); + + // + // Disable Interrupt, save InterruptState and save PEI IDT table + // + InterruptState =3D SaveAndDisableInterrupts (); + AsmReadIdtr (&PeiIdtr); + + // + // Save the PEI Services Table pointer + // The PEI Services Table pointer will be stored in the sizeof (UINTN) b= ytes + // immediately preceding the IDT in memory. + // + PeiServices =3D (CONST EFI_PEI_SERVICE= S **)(*(UINTN *)(PeiIdtr.Base - sizeof (UINTN))); + (*(UINTN *)(gcSmiIdtr.Base - sizeof (UINTN))) =3D (UINTN)PeiServices; + + // + // Load SMM temporary IDT table + // + AsmWriteIdtr (&gcSmiIdtr); + + // + // Setup SMM default exception handlers, SMM IDT table + // will be updated and saved in gcSmiIdtr + // + Status =3D InitializeCpuExceptionHandlers (NULL); + ASSERT_EFI_ERROR (Status); + + // + // Restore PEI IDT table and CPU InterruptState + // + AsmWriteIdtr ((IA32_DESCRIPTOR *)&PeiIdtr); + SetInterruptState (InterruptState); +} + +/** + Determine the mode of the CPU at the time an SMI occurs + + @retval EFI_MM_SAVE_STATE_REGISTER_LMA_32BIT 32 bit. + @retval EFI_MM_SAVE_STATE_REGISTER_LMA_64BIT 64 bit. + +**/ +UINT8 +CheckSmmCpuMode ( + VOID + ) +{ + UINT32 RegEax; + UINT32 RegEdx; + UINTN FamilyId; + UINTN ModelId; + UINT8 SmmSaveStateRegisterLma; + + // + // Determine the mode of the CPU at the time an SMI occurs + // Intel(R) 64 and IA-32 Architectures Software Developer's Manual + // Volume 3C, Section 34.4.1.1 + // + AsmCpuid (CPUID_VERSION_INFO, &RegEax, NULL, NULL, NULL); + FamilyId =3D (RegEax >> 8) & 0xf; + ModelId =3D (RegEax >> 4) & 0xf; + if ((FamilyId =3D=3D 0x06) || (FamilyId =3D=3D 0x0f)) { + ModelId =3D ModelId | ((RegEax >> 12) & 0xf0); + } + + RegEdx =3D 0; + AsmCpuid (CPUID_EXTENDED_FUNCTION, &RegEax, NULL, NULL, NULL); + if (RegEax >=3D CPUID_EXTENDED_CPU_SIG) { + AsmCpuid (CPUID_EXTENDED_CPU_SIG, NULL, NULL, NULL, &RegEdx); + } + + SmmSaveStateRegisterLma =3D EFI_MM_SAVE_STATE_REGISTER_LMA_32BIT; + if ((RegEdx & BIT29) !=3D 0) { + SmmSaveStateRegisterLma =3D EFI_MM_SAVE_STATE_REGISTER_LMA_64BIT; + } + + if (FamilyId =3D=3D 0x06) { + if ((ModelId =3D=3D 0x17) || (ModelId =3D=3D 0x0f) || (ModelId =3D=3D = 0x1c)) { + SmmSaveStateRegisterLma =3D EFI_MM_SAVE_STATE_REGISTER_LMA_64BIT; + } + } + + return SmmSaveStateRegisterLma; +} + +/** + CPU SmmBase Relocation Init. + + This function is to relocate CPU SmmBase. + + @param[in] MpServices2 Pointer to this instance of the MpServices= . + + @retval EFI_SUCCESS CPU SmmBase Relocated successfully. + @retval Others CPU SmmBase Relocation failed. 
+ +**/ +EFI_STATUS +EFIAPI +SmmRelocationInit ( + IN EDKII_PEI_MP_SERVICES2_PPI *MpServices2 + ) +{ + EFI_STATUS Status; + UINTN NumberOfEnabledCpus; + UINTN SmmStackSize; + UINT8 *SmmStacks; + UINTN Index; + + SmmStacks =3D NULL; + + DEBUG ((DEBUG_INFO, "SmmRelocationInit Start \n")); + if (MpServices2 =3D=3D NULL) { + return EFI_INVALID_PARAMETER; + } + + // + // Fix up the address of the global variable or function referred in + // SmmInit assembly files to be the absolute address + // + SmmInitFixupAddress (); + + // + // Check the mode of the CPU at the time an SMI occurs + // + mSmmSaveStateRegisterLma =3D CheckSmmCpuMode (); + + // + // Patch SMI stack for SMM base relocation + // Note: No need allocate stack for all CPUs since the relocation + // occurs serially for each CPU + // + SmmStackSize =3D EFI_PAGES_TO_SIZE (EFI_SIZE_TO_PAGES (PcdGet32 (PcdCpuS= mmStackSize))); + SmmStacks =3D (UINT8 *)AllocatePages (EFI_SIZE_TO_PAGES (SmmStackSize= )); + if (SmmStacks =3D=3D NULL) { + Status =3D EFI_OUT_OF_RESOURCES; + goto ON_EXIT; + } + + DEBUG ((DEBUG_INFO, "SmmRelocationInit - SmmStacks: 0x%x\n", SmmStacks))= ; + DEBUG ((DEBUG_INFO, "SmmRelocationInit - SmmStackSize: 0x%x\n", SmmStack= Size)); + + PatchInstructionX86 ( + gPatchSmmInitStack, + (UINTN)(SmmStacks + SmmStackSize - sizeof (UINTN)), + sizeof (UINTN) + ); + + // + // Initialize the SMM IDT for SMM base relocation + // + InitSmmIdt (); + + // + // Get the number of processors + // + Status =3D MpServices2->GetNumberOfProcessors ( + MpServices2, + &mNumberOfCpus, + &NumberOfEnabledCpus + ); + if (EFI_ERROR (Status)) { + return Status; + } + + if (FeaturePcdGet (PcdCpuHotPlugSupport)) { + mMaxNumberOfCpus =3D PcdGet32 (PcdCpuMaxLogicalProcessorNumber); + } else { + mMaxNumberOfCpus =3D mNumberOfCpus; + } + + // + // Retrieve the Processor Info for all CPUs + // + mProcessorInfo =3D (EFI_PROCESSOR_INFORMATION *)AllocatePool (sizeof (EF= I_PROCESSOR_INFORMATION) * mMaxNumberOfCpus); + if (mProcessorInfo =3D=3D NULL) { + Status =3D EFI_OUT_OF_RESOURCES; + goto ON_EXIT; + } + + for (Index =3D 0; Index < mMaxNumberOfCpus; Index++) { + if (Index < mNumberOfCpus) { + Status =3D MpServices2->GetProcessorInfo (MpServices2, Index | CPU_V= 2_EXTENDED_TOPOLOGY, &mProcessorInfo[Index]); + if (EFI_ERROR (Status)) { + goto ON_EXIT; + } + } + } + + // + // Initialize the SmBase for all CPUs + // + Status =3D InitSmBaseForAllCpus (&mSmBaseForAllCpus); + if (EFI_ERROR (Status)) { + goto ON_EXIT; + } + + // + // Relocate SmmBases for each processor. + // Allocate mRebased as the flag to indicate the relocation is done for = each CPU. 
+ // + mRebased =3D (BOOLEAN *)AllocateZeroPool (sizeof (BOOLEAN) * mMaxNumberO= fCpus); + if (mRebased =3D=3D NULL) { + Status =3D EFI_OUT_OF_RESOURCES; + goto ON_EXIT; + } + + SmmRelocateBases (); + + // + // Create the SmBase HOB for all CPUs + // + Status =3D CreateSmmBaseHob (mSmBaseForAllCpus); + +ON_EXIT: + if (SmmStacks !=3D NULL) { + FreePages (SmmStacks, EFI_SIZE_TO_PAGES (SmmStackSize)); + } + + if (mSmBaseForAllCpus !=3D NULL) { + FreePages (mSmBaseForAllCpus, EFI_SIZE_TO_PAGES (sizeof (UINT64) * mMa= xNumberOfCpus)); + } + + DEBUG ((DEBUG_INFO, "SmmRelocationInit Done\n")); + return Status; +} diff --git a/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.inf b/Uef= iCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.inf new file mode 100644 index 0000000000..2ac16ab5d1 --- /dev/null +++ b/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.inf @@ -0,0 +1,61 @@ +## @file +# SMM Relocation Lib for each processor. +# +# This Lib produces the SMM_BASE_HOB in HOB database which tells +# the PiSmmCpuDxeSmm driver (runs at a later phase) about the new +# SMBASE for each processor. PiSmmCpuDxeSmm driver installs the +# SMI handler at the SMM_BASE_HOB.SmBase[Index]+0x8000 for processor +# Index. +# +# Copyright (c) 2024, Intel Corporation. All rights reserved.
+# SPDX-License-Identifier: BSD-2-Clause-Patent +# +## + +[Defines] + INF_VERSION =3D 0x00010005 + BASE_NAME =3D SmmRelocationLib + FILE_GUID =3D 853E97B3-790C-4EA3-945C-8F622FC47FE8 + MODULE_TYPE =3D PEIM + VERSION_STRING =3D 1.0 + LIBRARY_CLASS =3D SmmRelocationLib + +[Sources] + InternalSmmRelocationLib.h + SmramSaveStateConfig.c + SmmRelocationLib.c + +[Sources.Ia32] + Ia32/Semaphore.c + Ia32/SmmInit.nasm + +[Sources.X64] + X64/Semaphore.c + X64/SmmInit.nasm + +[Packages] + MdePkg/MdePkg.dec + MdeModulePkg/MdeModulePkg.dec + UefiCpuPkg/UefiCpuPkg.dec + +[LibraryClasses] + BaseLib + BaseMemoryLib + CpuExceptionHandlerLib + DebugLib + HobLib + LocalApicLib + MemoryAllocationLib + PcdLib + PeiServicesLib + +[Guids] + gSmmBaseHobGuid ## HOB ALWAYS_PRODUCED + gEfiSmmSmramMemoryGuid ## CONSUMES + +[Pcd] + gUefiCpuPkgTokenSpaceGuid.PcdCpuMaxLogicalProcessorNumber + gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmStackSize ## CONS= UMES + +[FeaturePcd] + gUefiCpuPkgTokenSpaceGuid.PcdCpuHotPlugSupport ##= CONSUMES diff --git a/UefiCpuPkg/Library/SmmRelocationLib/SmramSaveStateConfig.c b/U= efiCpuPkg/Library/SmmRelocationLib/SmramSaveStateConfig.c new file mode 100644 index 0000000000..3982158979 --- /dev/null +++ b/UefiCpuPkg/Library/SmmRelocationLib/SmramSaveStateConfig.c @@ -0,0 +1,91 @@ +/** @file + Config SMRAM Save State for SmmBases Relocation. + + Copyright (c) 2024, Intel Corporation. All rights reserved.
+ SPDX-License-Identifier: BSD-2-Clause-Patent + +**/ +#include "InternalSmmRelocationLib.h" + +/** + This function configures the SmBase on the currently executing CPU. + + @param[in] CpuIndex The index of the CPU. + @param[in,out] CpuState Pointer to SMRAM Save State Map for = the + currently executing CPU. On out, SmB= ase is + updated to the new value. + +**/ +VOID +EFIAPI +ConfigureSmBase ( + IN UINTN CpuIndex, + IN OUT SMRAM_SAVE_STATE_MAP *CpuState + ) +{ + if (mSmmSaveStateRegisterLma =3D=3D EFI_MM_SAVE_STATE_REGISTER_LMA_32BIT= ) { + CpuState->x86.SMBASE =3D (UINT32)mSmBaseForAllCpus[CpuIndex]; + } else { + CpuState->x64.SMBASE =3D (UINT32)mSmBaseForAllCpus[CpuIndex]; + } +} + +/** + Hook the code executed immediately after an RSM instruction on the curre= ntly + executing CPU. The mode of code executed immediately after RSM must be + detected, and the appropriate hook must be selected. Always clear the a= uto + HALT restart flag if it is set. + + @param[in] CpuIndex The processor index for the curr= ently + executing CPU. + @param[in,out] CpuState Pointer to SMRAM Save State Map = for the + currently executing CPU. + @param[in] NewInstructionPointer32 Instruction pointer to use if re= suming to + 32-bit mode from 64-bit SMM. + @param[in] NewInstructionPointer Instruction pointer to use if re= suming to + same mode as SMM. + + @retval The value of the original instruction pointer before it was hook= ed. + +**/ +UINT64 +EFIAPI +HookReturnFromSmm ( + IN UINTN CpuIndex, + IN OUT SMRAM_SAVE_STATE_MAP *CpuState, + IN UINT64 NewInstructionPointer32, + IN UINT64 NewInstructionPointer + ) +{ + UINT64 OriginalInstructionPointer; + + if (mSmmSaveStateRegisterLma =3D=3D EFI_MM_SAVE_STATE_REGISTER_LMA_32BIT= ) { + OriginalInstructionPointer =3D (UINT64)CpuState->x86._EIP; + CpuState->x86._EIP =3D (UINT32)NewInstructionPointer; + + // + // Clear the auto HALT restart flag so the RSM instruction returns + // program control to the instruction following the HLT instruction. + // + if ((CpuState->x86.AutoHALTRestart & BIT0) !=3D 0) { + CpuState->x86.AutoHALTRestart &=3D ~BIT0; + } + } else { + OriginalInstructionPointer =3D CpuState->x64._RIP; + if ((CpuState->x64.IA32_EFER & LMA) =3D=3D 0) { + CpuState->x64._RIP =3D (UINT32)NewInstructionPointer32; + } else { + CpuState->x64._RIP =3D (UINT32)NewInstructionPointer; + } + + // + // Clear the auto HALT restart flag so the RSM instruction returns + // program control to the instruction following the HLT instruction. + // + if ((CpuState->x64.AutoHALTRestart & BIT0) !=3D 0) { + CpuState->x64.AutoHALTRestart &=3D ~BIT0; + } + } + + return OriginalInstructionPointer; +} diff --git a/UefiCpuPkg/Library/SmmRelocationLib/X64/Semaphore.c b/UefiCpuP= kg/Library/SmmRelocationLib/X64/Semaphore.c new file mode 100644 index 0000000000..54d3462ef8 --- /dev/null +++ b/UefiCpuPkg/Library/SmmRelocationLib/X64/Semaphore.c @@ -0,0 +1,70 @@ +/** @file + Semaphore mechanism to indicate to the BSP that an AP has exited SMM + after SMBASE relocation. + + Copyright (c) 2024, Intel Corporation. All rights reserved.
+ SPDX-License-Identifier: BSD-2-Clause-Patent + +**/ + +#include "InternalSmmRelocationLib.h" + +X86_ASSEMBLY_PATCH_LABEL gPatchSmmRelocationOriginalAddressPtr32; +X86_ASSEMBLY_PATCH_LABEL gPatchRebasedFlagAddr32; + +UINTN mSmmRelocationOriginalAddress; +volatile BOOLEAN *mRebasedFlag; + +/** +AP Semaphore operation in 32-bit mode while BSP runs in 64-bit mode. +**/ +VOID +SmmRelocationSemaphoreComplete32 ( + VOID + ); + +/** + Hook return address of SMM Save State so that semaphore code + can be executed immediately after AP exits SMM to indicate to + the BSP that an AP has exited SMM after SMBASE relocation. + + @param[in] CpuIndex The processor index. + @param[in] RebasedFlag A pointer to a flag that is set to TRUE + immediately after AP exits SMM. + +**/ +VOID +SemaphoreHook ( + IN UINTN CpuIndex, + IN volatile BOOLEAN *RebasedFlag + ) +{ + SMRAM_SAVE_STATE_MAP *CpuState; + UINTN TempValue; + + mRebasedFlag =3D RebasedFlag; + PatchInstructionX86 ( + gPatchRebasedFlagAddr32, + (UINT32)(UINTN)mRebasedFlag, + 4 + ); + + CpuState =3D (SMRAM_SAVE_STATE_MAP *)(UINTN)(SMM_DEFAULT_SMBASE + SMRAM_= SAVE_STATE_MAP_OFFSET); + + mSmmRelocationOriginalAddress =3D HookReturnFromSmm ( + CpuIndex, + CpuState, + (UINT64)(UINTN)&SmmRelocationSemaphore= Complete32, + (UINT64)(UINTN)&SmmRelocationSemaphore= Complete + ); + + // + // Use temp value to fix ICC compiler warning + // + TempValue =3D (UINTN)&mSmmRelocationOriginalAddress; + PatchInstructionX86 ( + gPatchSmmRelocationOriginalAddressPtr32, + (UINT32)TempValue, + 4 + ); +} diff --git a/UefiCpuPkg/Library/SmmRelocationLib/X64/SmmInit.nasm b/UefiCpu= Pkg/Library/SmmRelocationLib/X64/SmmInit.nasm new file mode 100644 index 0000000000..ce4311fffd --- /dev/null +++ b/UefiCpuPkg/Library/SmmRelocationLib/X64/SmmInit.nasm @@ -0,0 +1,207 @@ +;-------------------------------------------------------------------------= ----- ; +; Copyright (c) 2024, Intel Corporation. All rights reserved.
+; SPDX-License-Identifier: BSD-2-Clause-Patent +; +; Module Name: +; +; SmmInit.nasm +; +; Abstract: +; +; Functions for relocating SMBASE's for all processors +; +;-------------------------------------------------------------------------= ------ + +%include "StuffRsbNasm.inc" + +global ASM_PFX(gcSmiIdtr) +global ASM_PFX(gcSmiGdtr) + +extern ASM_PFX(SmmInitHandler) +extern ASM_PFX(mRebasedFlag) +extern ASM_PFX(mSmmRelocationOriginalAddress) + +global ASM_PFX(gPatchSmmCr3) +global ASM_PFX(gPatchSmmCr4) +global ASM_PFX(gPatchSmmCr0) +global ASM_PFX(gPatchSmmInitStack) +global ASM_PFX(gcSmmInitSize) +global ASM_PFX(gcSmmInitTemplate) +global ASM_PFX(gPatchRebasedFlagAddr32) +global ASM_PFX(gPatchSmmRelocationOriginalAddressPtr32) + +%define LONG_MODE_CS 0x38 + + SECTION .data + +NullSeg: DQ 0 ; reserved by architecture +CodeSeg32: + DW -1 ; LimitLow + DW 0 ; BaseLow + DB 0 ; BaseMid + DB 0x9b + DB 0xcf ; LimitHigh + DB 0 ; BaseHigh +ProtModeCodeSeg32: + DW -1 ; LimitLow + DW 0 ; BaseLow + DB 0 ; BaseMid + DB 0x9b + DB 0xcf ; LimitHigh + DB 0 ; BaseHigh +ProtModeSsSeg32: + DW -1 ; LimitLow + DW 0 ; BaseLow + DB 0 ; BaseMid + DB 0x93 + DB 0xcf ; LimitHigh + DB 0 ; BaseHigh +DataSeg32: + DW -1 ; LimitLow + DW 0 ; BaseLow + DB 0 ; BaseMid + DB 0x93 + DB 0xcf ; LimitHigh + DB 0 ; BaseHigh +CodeSeg16: + DW -1 + DW 0 + DB 0 + DB 0x9b + DB 0x8f + DB 0 +DataSeg16: + DW -1 + DW 0 + DB 0 + DB 0x93 + DB 0x8f + DB 0 +CodeSeg64: + DW -1 ; LimitLow + DW 0 ; BaseLow + DB 0 ; BaseMid + DB 0x9b + DB 0xaf ; LimitHigh + DB 0 ; BaseHigh +GDT_SIZE equ $ - NullSeg + +ASM_PFX(gcSmiGdtr): + DW GDT_SIZE - 1 + DQ NullSeg + +ASM_PFX(gcSmiIdtr): + DW 0 + DQ 0 + + + DEFAULT REL + SECTION .text + +global ASM_PFX(SmmStartup) + +BITS 16 +ASM_PFX(SmmStartup): + ;mov eax, 0x80000001 ; read capability + ;cpuid + ;mov ebx, edx ; rdmsr will change edx. keep it = in ebx. + mov eax, strict dword 0 ; source operand will be patched +ASM_PFX(gPatchSmmCr3): + mov cr3, eax +o32 lgdt [cs:ebp + (ASM_PFX(gcSmiGdtr) - ASM_PFX(SmmStartup))] + mov eax, strict dword 0 ; source operand will be patched +ASM_PFX(gPatchSmmCr4): + or ah, 2 ; enable XMM registers access + mov cr4, eax + mov ecx, 0xc0000080 ; IA32_EFER MSR + rdmsr + or ah, BIT0 ; set LME bit + ;test ebx, BIT20 ; check NXE capability + ;jz .1 + ;or ah, BIT3 ; set NXE bit +;.1: + wrmsr + mov eax, strict dword 0 ; source operand will be patched +ASM_PFX(gPatchSmmCr0): + mov cr0, eax ; enable protected mode & paging + jmp LONG_MODE_CS : dword 0 ; offset will be patched to @LongM= ode +@PatchLongModeOffset: + +BITS 64 +@LongMode: ; long-mode starts here + mov rsp, strict qword 0 ; source operand will be patched +ASM_PFX(gPatchSmmInitStack): + and sp, 0xfff0 ; make sure RSP is 16-byte aligned + ; + ; According to X64 calling convention, XMM0~5 are volatile, we need to= save + ; them before calling C-function. + ; + sub rsp, 0x60 + movdqa [rsp], xmm0 + movdqa [rsp + 0x10], xmm1 + movdqa [rsp + 0x20], xmm2 + movdqa [rsp + 0x30], xmm3 + movdqa [rsp + 0x40], xmm4 + movdqa [rsp + 0x50], xmm5 + + add rsp, -0x20 + call ASM_PFX(SmmInitHandler) + add rsp, 0x20 + + ; + ; Restore XMM0~5 after calling C-function. 
+ ; + movdqa xmm0, [rsp] + movdqa xmm1, [rsp + 0x10] + movdqa xmm2, [rsp + 0x20] + movdqa xmm3, [rsp + 0x30] + movdqa xmm4, [rsp + 0x40] + movdqa xmm5, [rsp + 0x50] + + StuffRsb64 + rsm + +BITS 16 +ASM_PFX(gcSmmInitTemplate): + mov ebp, [cs:@L1 - ASM_PFX(gcSmmInitTemplate) + 0x8000] + sub ebp, 0x30000 + jmp ebp +@L1: + DQ 0; ASM_PFX(SmmStartup) + +ASM_PFX(gcSmmInitSize): DW $ - ASM_PFX(gcSmmInitTemplate) + +BITS 64 +global ASM_PFX(SmmRelocationSemaphoreComplete) +ASM_PFX(SmmRelocationSemaphoreComplete): + push rax + mov rax, [ASM_PFX(mRebasedFlag)] + mov byte [rax], 1 + pop rax + jmp [ASM_PFX(mSmmRelocationOriginalAddress)] + +; +; Semaphore code running in 32-bit mode +; +BITS 32 +global ASM_PFX(SmmRelocationSemaphoreComplete32) +ASM_PFX(SmmRelocationSemaphoreComplete32): + push eax + mov eax, strict dword 0 ; source operand will be pa= tched +ASM_PFX(gPatchRebasedFlagAddr32): + mov byte [eax], 1 + pop eax + jmp dword [dword 0] ; destination will be patch= ed +ASM_PFX(gPatchSmmRelocationOriginalAddressPtr32): + +BITS 64 +global ASM_PFX(SmmInitFixupAddress) +ASM_PFX(SmmInitFixupAddress): + lea rax, [@LongMode] + lea rcx, [@PatchLongModeOffset - 6] + mov dword [rcx], eax + + lea rax, [ASM_PFX(SmmStartup)] + lea rcx, [@L1] + mov qword [rcx], rax + ret -- 2.16.2.windows.1 -=3D-=3D-=3D-=3D-=3D-=3D-=3D-=3D-=3D-=3D-=3D- Groups.io Links: You receive all messages sent to this group. View/Reply Online (#117610): https://edk2.groups.io/g/devel/message/117610 Mute This Topic: https://groups.io/mt/105441989/7686176 Group Owner: devel+owner@edk2.groups.io Unsubscribe: https://edk2.groups.io/g/devel/unsub [rebecca@openfw.io] -=3D-=3D-=3D-=3D-=3D-=3D-=3D-=3D-=3D-=3D-=3D- --_000_MN6PR11MB824463D2573B81AE262F338E8C052MN6PR11MB8244namp_ Content-Type: text/html; charset="us-ascii" Content-Transfer-Encoding: quoted-printable
Not sure if "--find-copies-harder" helps to detect that the new f= iles are copies of existing files.
Can you try in next version if some comments request a new version of patch= ?

Thanks,
Ray

From: Wu, Jiaxin <jiaxin= .wu@intel.com>
Sent: Wednesday, April 10, 2024 21:57
To: devel@edk2.groups.io <devel@edk2.groups.io>
Cc: Ni, Ray <ray.ni@intel.com>; Zeng, Star <star.zeng@intel= .com>; Gerd Hoffmann <kraxel@redhat.com>; Kumar, Rahul R <rahul= .r.kumar@intel.com>
Subject: [PATCH v1 02/13] UefiCpuPkg/SmmRelocationLib: Add SmmReloca= tionLib library instance
 
This patch separates the smbase relocation logic f= rom
PiSmmCpuDxeSmm driver, and moves to the
SmmRelocationInit interface.

Platform shall consume the interface for the smbase
relocation if need SMM support.

Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
---
 .../Library/SmmRelocationLib/Ia32/Semaphore.c    =   |  43 ++
 .../Library/SmmRelocationLib/Ia32/SmmInit.nasm    = ; | 157 +++++
 .../SmmRelocationLib/InternalSmmRelocationLib.h    | 1= 41 +++++
 .../Library/SmmRelocationLib/SmmRelocationLib.c    | 6= 59 +++++++++++++++++++++
 .../Library/SmmRelocationLib/SmmRelocationLib.inf  |  61 ++=
 .../SmmRelocationLib/SmramSaveStateConfig.c    &n= bsp;   |  91 +++
 .../Library/SmmRelocationLib/X64/Semaphore.c    &= nbsp;  |  70 +++
 .../Library/SmmRelocationLib/X64/SmmInit.nasm    =   | 207 +++++++
 8 files changed, 1429 insertions(+)
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/Ia32/Semaphore= .c
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/Ia32/SmmInit.n= asm
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/InternalSmmRel= ocationLib.h
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationL= ib.c
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationL= ib.inf
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/SmramSaveState= Config.c
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/X64/Semaphore.= c
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/X64/SmmInit.na= sm

diff --git a/UefiCpuPkg/Library/SmmRelocationLib/Ia32/Semaphore.c b/UefiCpu= Pkg/Library/SmmRelocationLib/Ia32/Semaphore.c
new file mode 100644
index 0000000000..ace3221cfc
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/Ia32/Semaphore.c
@@ -0,0 +1,43 @@
+/** @file
+  Semaphore mechanism to indicate to the BSP that an AP has exited SM= M
+  after SMBASE relocation.
+
+  Copyright (c) 2024, Intel Corporation. All rights reserved.<BR&g= t;
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include "InternalSmmRelocationLib.h"
+
+UINTN           &nb= sp; mSmmRelocationOriginalAddress;
+volatile BOOLEAN  *mRebasedFlag;
+
+/**
+  Hook return address of SMM Save State so that semaphore code
+  can be executed immediately after AP exits SMM to indicate to
+  the BSP that an AP has exited SMM after SMBASE relocation.
+
+  @param[in] CpuIndex     The processor index. +  @param[in] RebasedFlag  A pointer to a flag that is set to TRU= E
+            &n= bsp;            = ; immediately after AP exits SMM.
+
+**/
+VOID
+SemaphoreHook (
+  IN UINTN          = ;   CpuIndex,
+  IN volatile BOOLEAN  *RebasedFlag
+  )
+{
+  SMRAM_SAVE_STATE_MAP  *CpuState;
+
+  mRebasedFlag =3D RebasedFlag;
+
+  CpuState =3D (SMRAM_SAVE_STATE_MAP *)(UINTN)(SMM_DEFAULT_SMBASE + S= MRAM_SAVE_STATE_MAP_OFFSET);
+
+  mSmmRelocationOriginalAddress =3D (UINTN)HookReturnFromSmm (
+            &n= bsp;            = ;            &n= bsp;     CpuIndex,
+            &n= bsp;            = ;            &n= bsp;     CpuState,
+            &n= bsp;            = ;            &n= bsp;     (UINT64)(UINTN)&SmmRelocationSemaphoreComp= lete,
+            &n= bsp;            = ;            &n= bsp;     (UINT64)(UINTN)&SmmRelocationSemaphoreComp= lete
+            &n= bsp;            = ;            &n= bsp;     );
+}
diff --git a/UefiCpuPkg/Library/SmmRelocationLib/Ia32/SmmInit.nasm b/UefiCp= uPkg/Library/SmmRelocationLib/Ia32/SmmInit.nasm
new file mode 100644
index 0000000000..cb8b030693
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/Ia32/SmmInit.nasm
@@ -0,0 +1,157 @@
+;-------------------------------------------------------------------------= ----- ;
+; Copyright (c) 2024, Intel Corporation. All rights reserved.<BR> +; SPDX-License-Identifier: BSD-2-Clause-Patent
+;
+; Module Name:
+;
+;   SmmInit.nasm
+;
+; Abstract:
+;
+;   Functions for relocating SMBASE's for all processors
+;
+;-------------------------------------------------------------------------= ------
+
+%include "StuffRsbNasm.inc"
+
+global  ASM_PFX(gcSmiIdtr)
+global  ASM_PFX(gcSmiGdtr)
+
+extern ASM_PFX(SmmInitHandler)
+extern ASM_PFX(mRebasedFlag)
+extern ASM_PFX(mSmmRelocationOriginalAddress)
+
+global ASM_PFX(gPatchSmmCr3)
+global ASM_PFX(gPatchSmmCr4)
+global ASM_PFX(gPatchSmmCr0)
+global ASM_PFX(gPatchSmmInitStack)
+global ASM_PFX(gcSmmInitSize)
+global ASM_PFX(gcSmmInitTemplate)
+
+%define PROTECT_MODE_CS 0x8
+%define PROTECT_MODE_DS 0x20
+
+    SECTION .data
+
+NullSeg: DQ 0                   ; reserved by architecture
+CodeSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x9b
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+ProtModeCodeSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x9b
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+ProtModeSsSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x93
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+DataSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x93
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+CodeSeg16:
+            DW      -1
+            DW      0
+            DB      0
+            DB      0x9b
+            DB      0x8f
+            DB      0
+DataSeg16:
+            DW      -1
+            DW      0
+            DB      0
+            DB      0x93
+            DB      0x8f
+            DB      0
+CodeSeg64:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x9b
+            DB      0xaf                ; LimitHigh
+            DB      0                   ; BaseHigh
+GDT_SIZE equ $ - NullSeg
+
+ASM_PFX(gcSmiGdtr):
+    DW      GDT_SIZE - 1
+    DD      NullSeg
+
+ASM_PFX(gcSmiIdtr):
+    DW      0
+    DD      0
+
+
+    SECTION .text
+
+global ASM_PFX(SmmStartup)
+
+BITS 16
+ASM_PFX(SmmStartup):
+    ;mov     eax, 0x80000001             ; read capability
+    ;cpuid
+    ;mov     ebx, edx                    ; rdmsr will change edx. keep it in ebx.
+    ;and     ebx, BIT20                  ; extract NX capability bit
+    ;shr     ebx, 9                      ; shift bit to IA32_EFER.NXE[BIT11] position
+    mov     eax, strict dword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmCr3):
+    mov     cr3, eax
+o32 lgdt    [cs:ebp + (ASM_PFX(gcSmiGdtr) - ASM_PFX(SmmStartup))]
+    mov     eax, strict dword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmCr4):
+    mov     cr4, eax
+    ;mov     ecx, 0xc0000080             ; IA32_EFER MSR
+    ;rdmsr
+    ;or      eax, ebx                    ; set NXE bit if NX is available
+    ;wrmsr
+    mov     eax, strict dword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmCr0):
+    mov     di, PROTECT_MODE_DS
+    mov     cr0, eax
+    jmp     PROTECT_MODE_CS : dword @32bit
+
+BITS 32
+@32bit:
+    mov     ds, edi
+    mov     es, edi
+    mov     fs, edi
+    mov     gs, edi
+    mov     ss, edi
+    mov     esp, strict dword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmInitStack):
+    call    ASM_PFX(SmmInitHandler)
+    StuffRsb32
+    rsm
+
+BITS 16
+ASM_PFX(gcSmmInitTemplate):
+    mov ebp, ASM_PFX(SmmStartup)
+    sub ebp, 0x30000
+    jmp ebp
+
+ASM_PFX(gcSmmInitSize): DW $ - ASM_PFX(gcSmmInitTemplate)
+
+BITS 32
+global ASM_PFX(SmmRelocationSemaphoreComplete)
+ASM_PFX(SmmRelocationSemaphoreComplete):
+    push    eax
+    mov     eax, [ASM_PFX(mRebasedFlag)]
+    mov     byte [eax], 1
+    pop     eax
+    jmp     [ASM_PFX(mSmmRelocationOriginalAddress)]
+
+global ASM_PFX(SmmInitFixupAddress)
+ASM_PFX(SmmInitFixupAddress):
+    ret
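A short worked example of the 0x30000 adjustment in gcSmmInitTemplate above (this is my reading of the code, stated as an assumption rather than something the patch spells out): the template is copied to the default SMI entry point at SMM_DEFAULT_SMBASE + SMM_HANDLER_OFFSET = 0x30000 + 0x8000 = 0x38000, and it executes in 16-bit SMM mode with the code segment base equal to the default SMBASE (0x30000). The jump therefore resolves as:

    target = CS.base + ebp
           = 0x30000 + (ASM_PFX(SmmStartup) - 0x30000)
           = ASM_PFX(SmmStartup)

so the subtraction simply converts the flat address of SmmStartup into an offset relative to the default SMBASE code segment before control reaches the protected-mode setup.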
diff --git a/UefiCpuPkg/Library/SmmRelocationLib/InternalSmmRelocationLib.h b/UefiCpuPkg/Library/SmmRelocationLib/InternalSmmRelocationLib.h
new file mode 100644
index 0000000000..c8647fbfe7
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/InternalSmmRelocationLib.h
@@ -0,0 +1,141 @@
+/** @file
+  SMM Relocation Lib for each processor.
+
+  This Lib produces the SMM_BASE_HOB in HOB database which tells
+  the PiSmmCpuDxeSmm driver (runs at a later phase) about the new
+  SMBASE for each processor. PiSmmCpuDxeSmm driver installs the
+  SMI handler at the SMM_BASE_HOB.SmBase[Index]+0x8000 for processor
+  Index.
+
+  Copyright (c) 2024, Intel Corporation. All rights reserved.<BR>
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#ifndef INTERNAL_SMM_RELOCATION_LIB_H_
+#define INTERNAL_SMM_RELOCATION_LIB_H_
+
+#include <PiPei.h>
+#include <Library/BaseLib.h>
+#include <Library/BaseMemoryLib.h>
+#include <Library/CpuExceptionHandlerLib.h>
+#include <Library/DebugLib.h>
+#include <Library/HobLib.h>
+#include <Library/LocalApicLib.h>
+#include <Library/MemoryAllocationLib.h>
+#include <Library/PcdLib.h>
+#include <Library/PeimEntryPoint.h>
+#include <Library/PeiServicesLib.h>
+#include <Library/SmmRelocationLib.h>
+#include <Guid/SmramMemoryReserve.h>
+#include <Guid/SmmBaseHob.h>
+#include <Register/Intel/Cpuid.h>
+#include <Register/Intel/SmramSaveStateMap.h>
+#include <Protocol/MmCpu.h>
+
+extern UINT64  *mSmBaseForAllCpus;
+extern UINT8   mSmmSaveStateRegisterLma;
+
+extern IA32_DESCRIPTOR  gcSmiGdtr;
+extern IA32_DESCRIPTOR  gcSmiIdtr;
+extern CONST UINT16     gcSmmInitSize;
+extern CONST UINT8      gcSmmInitTemplate[];
+
+X86_ASSEMBLY_PATCH_LABEL  gPatchSmmCr0;
+X86_ASSEMBLY_PATCH_LABEL  gPatchSmmCr3;
+X86_ASSEMBLY_PATCH_LABEL  gPatchSmmCr4;
+X86_ASSEMBLY_PATCH_LABEL  gPatchSmmInitStack;
+
+//
+// The size 0x20 must be bigger than the size of the SmmInit
+// template code. Currently, the SmmInit template requires a
+// buffer of at least 0x16 bytes.
+//
+#define BACK_BUF_SIZE  0x20
+
+#define CR4_CET_ENABLE  BIT23
+
+//
+// EFER register LMA bit
+//
+#define LMA  BIT10
+
+/**
+  This function configures the SmBase on the currently executing CPU.
+
+  @param[in]     CpuIndex             The index of the CPU.
+  @param[in,out] CpuState             Pointer to SMRAM Save State Map for the
+                                      currently executing CPU. On out, SmBase is
+                                      updated to the new value.
+
+**/
+VOID
+EFIAPI
+ConfigureSmBase (
+  IN     UINTN                 CpuIndex,
+  IN OUT SMRAM_SAVE_STATE_MAP  *CpuState
+  );
+
+/**
+  Semaphore operation performed after a processor's SMBASE relocation completes.
+**/
+VOID
+EFIAPI
+SmmRelocationSemaphoreComplete (
+  VOID
+  );
+
+/**
+  Hook the code executed immediately after an RSM instruction on the currently
+  executing CPU.  The mode of code executed immediately after RSM must be
+  detected, and the appropriate hook must be selected.  Always clear the auto
+  HALT restart flag if it is set.
+
+  @param[in]     CpuIndex                 The processor index for the currently
+                                          executing CPU.
+  @param[in,out] CpuState                 Pointer to SMRAM Save State Map for the
+                                          currently executing CPU.
+  @param[in]     NewInstructionPointer32  Instruction pointer to use if resuming to
+                                          32-bit mode from 64-bit SMM.
+  @param[in]     NewInstructionPointer    Instruction pointer to use if resuming to
+                                          same mode as SMM.
+
+  @retval The value of the original instruction pointer before it was hooked.
+
+**/
+UINT64
+EFIAPI
+HookReturnFromSmm (
+  IN     UINTN                 CpuIndex,
+  IN OUT SMRAM_SAVE_STATE_MAP  *CpuState,
+  IN     UINT64                NewInstructionPointer32,
+  IN     UINT64                NewInstructionPointer
+  );
+
+/**
+  Hook return address of SMM Save State so that semaphore code
+  can be executed immediately after AP exits SMM to indicate to
+  the BSP that an AP has exited SMM after SMBASE relocation.
+
+  @param[in] CpuIndex     The processor index.
+  @param[in] RebasedFlag  A pointer to a flag that is set to TRUE
+                          immediately after AP exits SMM.
+
+**/
+VOID
+SemaphoreHook (
+  IN UINTN             CpuIndex,
+  IN volatile BOOLEAN  *RebasedFlag
+  );
+
+/**
+  This function fixes up the address of the global variable or function
+  referred to in the SmmInit assembly files to be the absolute address.
+**/
+VOID
+EFIAPI
+SmmInitFixupAddress (
+  );
+
+#endif
diff --git a/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.c b/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.c
new file mode 100644
index 0000000000..38bad24e7a
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.c
@@ -0,0 +1,659 @@
+/** @file
+  SMM Relocation Lib for each processor.
+
+  This Lib produces the SMM_BASE_HOB in HOB database which tells
+  the PiSmmCpuDxeSmm driver (runs at a later phase) about the new
+  SMBASE for each processor. PiSmmCpuDxeSmm driver installs the
+  SMI handler at the SMM_BASE_HOB.SmBase[Index]+0x8000 for processor
+  Index.
+
+  Copyright (c) 2024, Intel Corporation. All rights reserved.<BR>
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+#include "InternalSmmRelocationLib.h"
+
+UINTN   mMaxNumberOfCpus   = 1;
+UINTN   mNumberOfCpus      = 1;
+UINT64  *mSmBaseForAllCpus = NULL;
+
+//
+// The mode of the CPU at the time an SMI occurs
+//
+UINT8  mSmmSaveStateRegisterLma;
+
+//
+// Record all Processors Info
+//
+EFI_PROCESSOR_INFORMATION  *mProcessorInfo = NULL;
+
+//
+// SmBase Rebased or not
+//
+volatile BOOLEAN  *mRebased;
+
+/**
+  C function for the SMI handler, used to change each processor's SMBASE register.
+
+**/
+VOID
+EFIAPI
+SmmInitHandler (
+  VOID
+  )
+{
+  UINT32  ApicId;
+  UINTN   Index;
+
+  SMRAM_SAVE_STATE_MAP  *CpuState;
+
+  //
+  // Update SMM IDT entries' code segment and load IDT
+  //
+  AsmWriteIdtr (&gcSmiIdtr);
+  ApicId = GetApicId ();
+
+  ASSERT (mNumberOfCpus <= mMaxNumberOfCpus);
+
+  for (Index = 0; Index < mNumberOfCpus; Index++) {
+    if (ApicId == (UINT32)mProcessorInfo[Index].ProcessorId) {
+      //
+      // Configure SmBase.
+      //
+      CpuState = (SMRAM_SAVE_STATE_MAP *)(UINTN)(SMM_DEFAULT_SMBASE + SMRAM_SAVE_STATE_MAP_OFFSET);
+      ConfigureSmBase (Index, CpuState);
+
+      //
+      // Hook return after RSM to set the SMM re-based flag.
+      // The flag cannot be set before RSM, because the SMM save state context
+      // might be overridden by the next AP's flow before it takes effect.
+      //
+      SemaphoreHook (Index, &mRebased[Index]);
+      return;
+    }
+  }
+
+  ASSERT (FALSE);
+}
+
+/**
+  This routine splits the SmramReserve HOB to reserve SmmRelocationSize for the SMM relocated memory.
+
+  @param[in]       SmmRelocationSize   SmmRelocationSize for all processors.
+  @param[in,out]   SmmRelocationStart  Return the start address of the SMM relocated memory in SMRAM.
+
+  @retval EFI_SUCCESS           The gEfiSmmSmramMemoryGuid is split successfully.
+  @retval EFI_DEVICE_ERROR      Failed to build new HOB for gEfiSmmSmramMemoryGuid.
+  @retval EFI_NOT_FOUND         The gEfiSmmSmramMemoryGuid is not found.
+
+**/
+EFI_STATUS
+SplitSmramHobForSmmRelocation (
+  IN     UINT64                SmmRelocationSize,
+  IN OUT EFI_PHYSICAL_ADDRESS  *SmmRelocationStart
+  )
+{
+  EFI_HOB_GUID_TYPE               *GuidHob;
+  EFI_SMRAM_HOB_DESCRIPTOR_BLOCK  *DescriptorBlock;
+  EFI_SMRAM_HOB_DESCRIPTOR_BLOCK  *NewDescriptorBlock;
+  UINTN                           BufferSize;
+  UINTN                           SmramRanges;
+
+  NewDescriptorBlock = NULL;
+
+  //
+  // Retrieve the GUID HOB data that contains the set of SMRAM descriptors
+  //
+  GuidHob = GetFirstGuidHob (&gEfiSmmSmramMemoryGuid);
+  if (GuidHob == NULL) {
+    return EFI_NOT_FOUND;
+  }
+
+  DescriptorBlock = (EFI_SMRAM_HOB_DESCRIPTOR_BLOCK *)GET_GUID_HOB_DATA (GuidHob);
+
+  //
+  // Allocate one extra EFI_SMRAM_DESCRIPTOR to describe SMRAM memory that contains a pointer
+  // to the SMM relocated memory.
+  //
+  SmramRanges = DescriptorBlock->NumberOfSmmReservedRegions;
+  BufferSize  = sizeof (EFI_SMRAM_HOB_DESCRIPTOR_BLOCK) + (SmramRanges * sizeof (EFI_SMRAM_DESCRIPTOR));
+
+  NewDescriptorBlock = (EFI_SMRAM_HOB_DESCRIPTOR_BLOCK *)BuildGuidHob (
+                                                           &gEfiSmmSmramMemoryGuid,
+                                                           BufferSize
+                                                           );
+  ASSERT (NewDescriptorBlock != NULL);
+  if (NewDescriptorBlock == NULL) {
+    return EFI_DEVICE_ERROR;
+  }
+
+  //
+  // Copy old EFI_SMRAM_HOB_DESCRIPTOR_BLOCK to the newly allocated region
+  //
+  CopyMem ((VOID *)NewDescriptorBlock, DescriptorBlock, BufferSize - sizeof (EFI_SMRAM_DESCRIPTOR));
+
+  //
+  // Increase the number of SMRAM descriptors by 1 to make room for the ALLOCATED descriptor of size SmmRelocationSize
+  //
+  NewDescriptorBlock->NumberOfSmmReservedRegions = (UINT32)(SmramRanges + 1);
+
+  ASSERT (SmramRanges >= 1);
+  //
+  // Copy last entry to the end - we assume TSEG is last entry.
+  //
+  CopyMem (&NewDescriptorBlock->Descriptor[SmramRanges], &NewDescriptorBlock->Descriptor[SmramRanges - 1], sizeof (EFI_SMRAM_DESCRIPTOR));
+
+  //
+  // Update the entry in the array with a size of SmmRelocationSize and put it into the ALLOCATED state
+  //
+  NewDescriptorBlock->Descriptor[SmramRanges - 1].PhysicalSize = SmmRelocationSize;
+  NewDescriptorBlock->Descriptor[SmramRanges - 1].RegionState |= EFI_ALLOCATED;
+
+  //
+  // Return the start address of the SMM relocated memory in SMRAM.
+  //
+  if (SmmRelocationStart != NULL) {
+    *SmmRelocationStart = NewDescriptorBlock->Descriptor[SmramRanges - 1].CpuStart;
+  }
+
+  //
+  // Reduce the size of the last SMRAM descriptor by SmmRelocationSize
+  //
+  NewDescriptorBlock->Descriptor[SmramRanges].PhysicalStart += SmmRelocationSize;
+  NewDescriptorBlock->Descriptor[SmramRanges].CpuStart      += SmmRelocationSize;
+  NewDescriptorBlock->Descriptor[SmramRanges].PhysicalSize  -= SmmRelocationSize;
+
+  //
+  // Last step, scrub the old HOB by clearing its GUID name
+  //
+  ZeroMem (&GuidHob->Name, sizeof (GuidHob->Name));
+
+  return EFI_SUCCESS;
+}
+
+/**
+  This function will create the SMM Base HOB for all CPUs.
+
+  @param[in] SmBaseForAllCpus    Pointer to SmBase for all CPUs.
+
+  @retval EFI_SUCCESS           The SMM Base HOB is created for all CPUs successfully.
+  @retval Others                Failed to create the SMM Base HOB for all CPUs.
+
+**/
+EFI_STATUS
+CreateSmmBaseHob (
+  IN UINT64  *SmBaseForAllCpus
+  )
+{
+  UINTN              Index;
+  SMM_BASE_HOB_DATA  *SmmBaseHobData;
+  UINT32             CpuCount;
+  UINT32             NumberOfProcessorsInHob;
+  UINT32             MaxCapOfProcessorsInHob;
+  UINT32             HobCount;
+
+  SmmBaseHobData          = NULL;
+  CpuCount                = 0;
+  NumberOfProcessorsInHob = 0;
+  MaxCapOfProcessorsInHob = 0;
+  HobCount                = 0;
+
+  //
+  // Count the maximum number of CPUs one HOB instance can hold (MaxCapOfProcessorsInHob),
+  // since the maximum HobLength is 0xFFF8.
+  //
+  MaxCapOfProcessorsInHob = (0xFFF8 - sizeof (EFI_HOB_GUID_TYPE) - sizeof (SMM_BASE_HOB_DATA)) / sizeof (UINT64) + 1;
+  DEBUG ((DEBUG_INFO, "CreateSmmBaseHob - MaxCapOfProcessorsInHob: %03x\n", MaxCapOfProcessorsInHob));
+
+  //
+  // Create Guided SMM Base HOB Instances.
+  //
+  while (CpuCount != mMaxNumberOfCpus) {
+    NumberOfProcessorsInHob = MIN ((UINT32)mMaxNumberOfCpus - CpuCount, MaxCapOfProcessorsInHob);
+
+    SmmBaseHobData = BuildGuidHob (
+                       &gSmmBaseHobGuid,
+                       sizeof (SMM_BASE_HOB_DATA) + sizeof (UINT64) * NumberOfProcessorsInHob
+                       );
+    if (SmmBaseHobData == NULL) {
+      return EFI_OUT_OF_RESOURCES;
+    }
+
+    SmmBaseHobData->ProcessorIndex     = CpuCount;
+    SmmBaseHobData->NumberOfProcessors = NumberOfProcessorsInHob;
+
+    DEBUG ((DEBUG_INFO, "CreateSmmBaseHob - SmmBaseHobData[%d]->ProcessorIndex: %03x\n", HobCount, SmmBaseHobData->ProcessorIndex));
+    DEBUG ((DEBUG_INFO, "CreateSmmBaseHob - SmmBaseHobData[%d]->NumberOfProcessors: %03x\n", HobCount, SmmBaseHobData->NumberOfProcessors));
+    for (Index = 0; Index < SmmBaseHobData->NumberOfProcessors; Index++) {
+      //
+      // Calculate the new SMBASE address
+      //
+      SmmBaseHobData->SmBase[Index] = SmBaseForAllCpus[Index + CpuCount];
+      DEBUG ((DEBUG_INFO, "CreateSmmBaseHob - SmmBaseHobData[%d]->SmBase[%03x]: %08x\n", HobCount, Index, SmmBaseHobData->SmBase[Index]));
+    }
+
+    CpuCount += NumberOfProcessorsInHob;
+    HobCount++;
+    SmmBaseHobData = NULL;
+  }
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Relocate SmmBases for each processor.
+  Execute on first boot and all S3 resumes
+
+**/
+VOID
+SmmRelocateBases (
+  VOID
+  )
+{
+  UINT8                 BakBuf[BACK_BUF_SIZE];
+  SMRAM_SAVE_STATE_MAP  BakBuf2;
+  SMRAM_SAVE_STATE_MAP  *CpuStatePtr;
+  UINT8                 *U8Ptr;
+  UINTN                 Index;
+  UINTN                 BspIndex;
+  UINT32                BspApicId;
+
+  //
+  // Make sure the reserved size is large enough for the SmmInit template code.
+  //
+  ASSERT (sizeof (BakBuf) >= gcSmmInitSize);
+
+  //
+  // Patch ASM code template with current CR0, CR3, and CR4 values
+  //
+  PatchInstructionX86 (gPatchSmmCr0, AsmReadCr0 (), 4);
+  PatchInstructionX86 (gPatchSmmCr3, AsmReadCr3 (), 4);
+  PatchInstructionX86 (gPatchSmmCr4, AsmReadCr4 () & (~CR4_CET_ENABLE), 4);
+
+  U8Ptr       = (UINT8 *)(UINTN)(SMM_DEFAULT_SMBASE + SMM_HANDLER_OFFSET);
+  CpuStatePtr = (SMRAM_SAVE_STATE_MAP *)(UINTN)(SMM_DEFAULT_SMBASE + SMRAM_SAVE_STATE_MAP_OFFSET);
+
+  //
+  // Backup original contents at address 0x38000
+  //
+  CopyMem (BakBuf, U8Ptr, sizeof (BakBuf));
+  CopyMem (&BakBuf2, CpuStatePtr, sizeof (BakBuf2));
+
+  //
+  // Load image for relocation
+  //
+  CopyMem (U8Ptr, gcSmmInitTemplate, gcSmmInitSize);
+
+  //
+  // Retrieve the local APIC ID of current processor
+  //
+  BspApicId = GetApicId ();
+
+  //
+  // Relocate SM bases for all APs
+  // This is the APs' 1st SMI - rebase will be done here, and the APs' default SMI handler will be overridden by gcSmmInitTemplate
+  //
+  BspIndex = (UINTN)-1;
+  for (Index = 0; Index < mNumberOfCpus; Index++) {
+    mRebased[Index] = FALSE;
+    if (BspApicId != (UINT32)mProcessorInfo[Index].ProcessorId) {
+      SendSmiIpi ((UINT32)mProcessorInfo[Index].ProcessorId);
+      //
+      // Wait for this AP to finish its 1st SMI
+      //
+      while (!mRebased[Index]) {
+      }
+    } else {
+      //
+      // BSP will be relocated later
+      //
+      BspIndex = Index;
+    }
+  }
+
+  //
+  // Relocate BSP's SMM base
+  //
+  ASSERT (BspIndex != (UINTN)-1);
+  SendSmiIpi (BspApicId);
+
+  //
+  // Wait for the BSP to finish its 1st SMI
+  //
+  while (!mRebased[BspIndex]) {
+  }
+
+  //
+  // Restore contents at address 0x38000
+  //
+  CopyMem (CpuStatePtr, &BakBuf2, sizeof (BakBuf2));
+  CopyMem (U8Ptr, BakBuf, sizeof (BakBuf));
+}
+
+/**
+  This function will initialize SmBase for all CPUs.
+
+  @param[in,out] SmBaseForAllCpus    Pointer to SmBase for all CPUs.
+
+  @retval EFI_SUCCESS           Initialize SmBase for all CPUs successfully.
+  @retval Others                Failed to initialize SmBase for all CPUs.
+
+**/
+EFI_STATUS
+InitSmBaseForAllCpus (
+  IN OUT UINT64  **SmBaseForAllCpus
+  )
+{
+  EFI_STATUS            Status;
+  UINTN                 TileSize;
+  UINT64                SmmRelocationSize;
+  EFI_PHYSICAL_ADDRESS  SmmRelocationStart;
+  UINTN                 Index;
+
+  SmmRelocationStart = 0;
+
+  ASSERT (SmBaseForAllCpus != NULL);
+
+  //
+  // Calculate SmmRelocationSize for all of the tiles.
+  //
+  // The CPU save state and code for the SMI entry point are tiled within an SMRAM
+  // allocated buffer. The minimum size of this buffer for a uniprocessor system
+  // is 32 KB, because the entry point is SMBASE + 32KB, and the CPU save state area
+  // is just below SMBASE + 64KB. If more than one CPU is present in the platform,
+  // then the SMI entry point and the CPU save state areas can be tiled to minimize
+  // the total amount of SMRAM required for all the CPUs. The tile size can be computed
+  // by adding the CPU save state size, any extra CPU specific context, and
+  // the size of code that must be placed at the SMI entry point to transfer
+  // control to a C function in the native SMM execution mode. This size is
+  // rounded up to the nearest power of 2 to give the tile size for each CPU.
+  // The total amount of memory required is the maximum number of CPUs that the
+  // platform supports times the tile size.
+  //
+  TileSize          = SIZE_8KB;
+  SmmRelocationSize = EFI_PAGES_TO_SIZE (EFI_SIZE_TO_PAGES (SIZE_32KB + TileSize * (mMaxNumberOfCpus - 1)));
+
+  //
+  // Split the SmramReserve HOB to reserve SmmRelocationSize for the SMM relocated memory
+  //
+  Status = SplitSmramHobForSmmRelocation (
+             SmmRelocationSize,
+             &SmmRelocationStart
+             );
+  if (EFI_ERROR (Status)) {
+    return Status;
+  }
+
+  ASSERT (SmmRelocationStart != 0);
+  DEBUG ((DEBUG_INFO, "InitSmBaseForAllCpus - SmmRelocationSize: 0x%08x\n", SmmRelocationSize));
+  DEBUG ((DEBUG_INFO, "InitSmBaseForAllCpus - SmmRelocationStart: 0x%08x\n", SmmRelocationStart));
+
+  //
+  // Init SmBaseForAllCpus
+  //
+  *SmBaseForAllCpus = (UINT64 *)AllocatePages (EFI_SIZE_TO_PAGES (sizeof (UINT64) * mMaxNumberOfCpus));
+  if (*SmBaseForAllCpus == NULL) {
+    return EFI_OUT_OF_RESOURCES;
+  }
+
+  for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
+    //
+    // Return each SmBase in SmBaseForAllCpus
+    //
+    (*SmBaseForAllCpus)[Index] = (UINTN)(SmmRelocationStart) + Index * TileSize - SMM_HANDLER_OFFSET;
+    DEBUG ((DEBUG_INFO, "InitSmBaseForAllCpus - SmBase For CPU[%03x]: %08x\n", Index, (*SmBaseForAllCpus)[Index]));
+  }
+
+  return EFI_SUCCESS;
+}
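To make the tiling arithmetic above concrete, here is a worked example with illustrative numbers only (not part of the patch): assume 4 CPUs, a hypothetical SmmRelocationStart of 0x7F000000, and the usual SMM_HANDLER_OFFSET of 0x8000.

    TileSize          = SIZE_8KB                           = 0x2000
    SmmRelocationSize = 32KB + 8KB * (4 - 1)               = 56KB (already page aligned)
    SmBase[0]         = 0x7F000000 + 0 * 0x2000 - 0x8000   = 0x7EFF8000
    SmBase[1]         = 0x7F000000 + 1 * 0x2000 - 0x8000   = 0x7EFFA000
    SmBase[3]         = 0x7F000000 + 3 * 0x2000 - 0x8000   = 0x7EFFE000

Each CPU's SMI entry point then lands at SmBase[Index] + 0x8000 = SmmRelocationStart + Index * TileSize, inside the range reserved by SplitSmramHobForSmmRelocation, and the last CPU's save state area (just below SmBase[3] + 64KB = 0x7F00E000) ends exactly at the end of the 56KB reservation, which is what the 32KB first-tile allowance accounts for.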
+
+/**
+  Initialize IDT to setup exception handlers in SMM.
+
+**/
+VOID
+InitSmmIdt (
+  VOID
+  )
+{
+  EFI_STATUS              Status;
+  BOOLEAN                 InterruptState;
+  IA32_DESCRIPTOR         PeiIdtr;
+  CONST EFI_PEI_SERVICES  **PeiServices;
+
+  //
+  // There are 32 (not 255) entries in it since only processor
+  // generated exceptions will be handled.
+  //
+  gcSmiIdtr.Limit = (sizeof (IA32_IDT_GATE_DESCRIPTOR) * 32) - 1;
+
+  //
+  // Allocate for IDT.
+  // sizeof (UINTN) is for the PEI Services Table pointer.
+  //
+  gcSmiIdtr.Base = (UINTN)AllocateZeroPool (gcSmiIdtr.Limit + 1 + sizeof (UINTN));
+  ASSERT (gcSmiIdtr.Base != 0);
+  gcSmiIdtr.Base += sizeof (UINTN);
+
+  //
+  // Disable interrupts, save the interrupt state, and save the PEI IDT table
+  //
+  InterruptState = SaveAndDisableInterrupts ();
+  AsmReadIdtr (&PeiIdtr);
+
+  //
+  // Save the PEI Services Table pointer
+  // The PEI Services Table pointer will be stored in the sizeof (UINTN) bytes
+  // immediately preceding the IDT in memory.
+  //
+  PeiServices                                   = (CONST EFI_PEI_SERVICES **)(*(UINTN *)(PeiIdtr.Base - sizeof (UINTN)));
+  (*(UINTN *)(gcSmiIdtr.Base - sizeof (UINTN))) = (UINTN)PeiServices;
+
+  //
+  // Load SMM temporary IDT table
+  //
+  AsmWriteIdtr (&gcSmiIdtr);
+
+  //
+  // Setup SMM default exception handlers, SMM IDT table
+  // will be updated and saved in gcSmiIdtr
+  //
+  Status = InitializeCpuExceptionHandlers (NULL);
+  ASSERT_EFI_ERROR (Status);
+
+  //
+  // Restore PEI IDT table and CPU InterruptState
+  //
+  AsmWriteIdtr ((IA32_DESCRIPTOR *)&PeiIdtr);
+  SetInterruptState (InterruptState);
+}
+
+/**
+  Determine the mode of the CPU at the time an SMI occurs
+
+  @retval EFI_MM_SAVE_STATE_REGISTER_LMA_32BIT   32 bit.
+  @retval EFI_MM_SAVE_STATE_REGISTER_LMA_64BIT   64 bit.
+
+**/
+UINT8
+CheckSmmCpuMode (
+  VOID
+  )
+{
+  UINT32  RegEax;
+  UINT32  RegEdx;
+  UINTN   FamilyId;
+  UINTN   ModelId;
+  UINT8   SmmSaveStateRegisterLma;
+
+  //
+  // Determine the mode of the CPU at the time an SMI occurs
+  //   Intel(R) 64 and IA-32 Architectures Software Developer's Manual
+  //   Volume 3C, Section 34.4.1.1
+  //
+  AsmCpuid (CPUID_VERSION_INFO, &RegEax, NULL, NULL, NULL);
+  FamilyId = (RegEax >> 8) & 0xf;
+  ModelId  = (RegEax >> 4) & 0xf;
+  if ((FamilyId == 0x06) || (FamilyId == 0x0f)) {
+    ModelId = ModelId | ((RegEax >> 12) & 0xf0);
+  }
+
+  RegEdx = 0;
+  AsmCpuid (CPUID_EXTENDED_FUNCTION, &RegEax, NULL, NULL, NULL);
+  if (RegEax >= CPUID_EXTENDED_CPU_SIG) {
+    AsmCpuid (CPUID_EXTENDED_CPU_SIG, NULL, NULL, NULL, &RegEdx);
+  }
+
+  SmmSaveStateRegisterLma = EFI_MM_SAVE_STATE_REGISTER_LMA_32BIT;
+  if ((RegEdx & BIT29) != 0) {
+    SmmSaveStateRegisterLma = EFI_MM_SAVE_STATE_REGISTER_LMA_64BIT;
+  }
+
+  if (FamilyId == 0x06) {
+    if ((ModelId == 0x17) || (ModelId == 0x0f) || (ModelId == 0x1c)) {
+      SmmSaveStateRegisterLma = EFI_MM_SAVE_STATE_REGISTER_LMA_64BIT;
+    }
+  }
+
+  return SmmSaveStateRegisterLma;
+}
+
+/**
+  CPU SmmBase Relocation Init.
+
+  This function is to relocate CPU SmmBase.
+
+  @param[in] MpServices2        Pointer to this instance of the MpServices.
+
+  @retval EFI_SUCCESS           CPU SmmBase relocated successfully.
+  @retval Others                CPU SmmBase relocation failed.
+
+**/
+EFI_STATUS
+EFIAPI
+SmmRelocationInit (
+  IN EDKII_PEI_MP_SERVICES2_PPI  *MpServices2
+  )
+{
+  EFI_STATUS  Status;
+  UINTN       NumberOfEnabledCpus;
+  UINTN       SmmStackSize;
+  UINT8       *SmmStacks;
+  UINTN       Index;
+
+  SmmStacks = NULL;
+
+  DEBUG ((DEBUG_INFO, "SmmRelocationInit Start \n"));
+  if (MpServices2 == NULL) {
+    return EFI_INVALID_PARAMETER;
+  }
+
+  //
+  // Fix up the address of the global variable or function referred to in
+  // SmmInit assembly files to be the absolute address
+  //
+  SmmInitFixupAddress ();
+
+  //
+  // Check the mode of the CPU at the time an SMI occurs
+  //
+  mSmmSaveStateRegisterLma = CheckSmmCpuMode ();
+
+  //
+  // Patch SMI stack for SMM base relocation
+  // Note: No need to allocate stacks for all CPUs since the relocation
+  // occurs serially for each CPU
+  //
+  SmmStackSize = EFI_PAGES_TO_SIZE (EFI_SIZE_TO_PAGES (PcdGet32 (PcdCpuSmmStackSize)));
+  SmmStacks    = (UINT8 *)AllocatePages (EFI_SIZE_TO_PAGES (SmmStackSize));
+  if (SmmStacks == NULL) {
+    Status = EFI_OUT_OF_RESOURCES;
+    goto ON_EXIT;
+  }
+
+  DEBUG ((DEBUG_INFO, "SmmRelocationInit - SmmStacks: 0x%x\n", SmmStacks));
+  DEBUG ((DEBUG_INFO, "SmmRelocationInit - SmmStackSize: 0x%x\n", SmmStackSize));
+
+  PatchInstructionX86 (
+    gPatchSmmInitStack,
+    (UINTN)(SmmStacks + SmmStackSize - sizeof (UINTN)),
+    sizeof (UINTN)
+    );
+
+  //
+  // Initialize the SMM IDT for SMM base relocation
+  //
+  InitSmmIdt ();
+
+  //
+  // Get the number of processors
+  //
+  Status = MpServices2->GetNumberOfProcessors (
+             MpServices2,
+             &mNumberOfCpus,
+             &NumberOfEnabledCpus
+             );
+  if (EFI_ERROR (Status)) {
+    return Status;
+  }
+
+  if (FeaturePcdGet (PcdCpuHotPlugSupport)) {
+    mMaxNumberOfCpus = PcdGet32 (PcdCpuMaxLogicalProcessorNumber);
+  } else {
+    mMaxNumberOfCpus = mNumberOfCpus;
+  }
+
+  //
+  // Retrieve the Processor Info for all CPUs
+  //
+  mProcessorInfo = (EFI_PROCESSOR_INFORMATION *)AllocatePool (sizeof (EFI_PROCESSOR_INFORMATION) * mMaxNumberOfCpus);
+  if (mProcessorInfo == NULL) {
+    Status = EFI_OUT_OF_RESOURCES;
+    goto ON_EXIT;
+  }
+
+  for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
+    if (Index < mNumberOfCpus) {
+      Status = MpServices2->GetProcessorInfo (MpServices2, Index | CPU_V2_EXTENDED_TOPOLOGY, &mProcessorInfo[Index]);
+      if (EFI_ERROR (Status)) {
+        goto ON_EXIT;
+      }
+    }
+  }
+
+  //
+  // Initialize the SmBase for all CPUs
+  //
+  Status = InitSmBaseForAllCpus (&mSmBaseForAllCpus);
+  if (EFI_ERROR (Status)) {
+    goto ON_EXIT;
+  }
+
+  //
+  // Relocate SmmBases for each processor.
+  // Allocate mRebased as the flag to indicate the relocation is done for each CPU.
+  //
+  mRebased = (BOOLEAN *)AllocateZeroPool (sizeof (BOOLEAN) * mMaxNumberOfCpus);
+  if (mRebased == NULL) {
+    Status = EFI_OUT_OF_RESOURCES;
+    goto ON_EXIT;
+  }
+
+  SmmRelocateBases ();
+
+  //
+  // Create the SmBase HOB for all CPUs
+  //
+  Status = CreateSmmBaseHob (mSmBaseForAllCpus);
+
+ON_EXIT:
+  if (SmmStacks != NULL) {
+    FreePages (SmmStacks, EFI_SIZE_TO_PAGES (SmmStackSize));
+  }
+
+  if (mSmBaseForAllCpus != NULL) {
+    FreePages (mSmBaseForAllCpus, EFI_SIZE_TO_PAGES (sizeof (UINT64) * mMaxNumberOfCpus));
+  }
+
+  DEBUG ((DEBUG_INFO, "SmmRelocationInit Done\n"));
+  return Status;
+}
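For readers who want to see the consumer side of the HOB produced by CreateSmmBaseHob above, a minimal sketch of how a later-phase driver (e.g. PiSmmCpuDxeSmm) could look up one CPU's relocated SMBASE follows. This is illustrative only and not part of the patch; it assumes the SMM_BASE_HOB_DATA fields used above (ProcessorIndex, NumberOfProcessors, SmBase[]), the standard HobLib API, and a hypothetical CpuIndex value.

  UINT64             CpuIndex;
  UINT64             SmBase;
  EFI_HOB_GUID_TYPE  *GuidHob;
  SMM_BASE_HOB_DATA  *SmmBaseHobData;

  CpuIndex = 2;   // hypothetical CPU index to look up
  SmBase   = 0;

  //
  // The HOB may be split into several instances when the CPU count is large,
  // so walk all gSmmBaseHobGuid instances until the one covering CpuIndex is found.
  //
  GuidHob = GetFirstGuidHob (&gSmmBaseHobGuid);
  while (GuidHob != NULL) {
    SmmBaseHobData = (SMM_BASE_HOB_DATA *)GET_GUID_HOB_DATA (GuidHob);
    if ((CpuIndex >= SmmBaseHobData->ProcessorIndex) &&
        (CpuIndex < SmmBaseHobData->ProcessorIndex + SmmBaseHobData->NumberOfProcessors)) {
      SmBase = SmmBaseHobData->SmBase[CpuIndex - SmmBaseHobData->ProcessorIndex];
      break;
    }

    GuidHob = GetNextGuidHob (&gSmmBaseHobGuid, GET_NEXT_HOB (GuidHob));
  }

  //
  // SmBase now holds the relocated SMBASE for CpuIndex; per the file header,
  // the SMI handler for that CPU is installed at SmBase + 0x8000.
  //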
diff --git a/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.inf b/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.inf
new file mode 100644
index 0000000000..2ac16ab5d1
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.inf
@@ -0,0 +1,61 @@
+## @file
+# SMM Relocation Lib for each processor.
+#
+# This Lib produces the SMM_BASE_HOB in HOB database which tells
+# the PiSmmCpuDxeSmm driver (runs at a later phase) about the new
+# SMBASE for each processor. PiSmmCpuDxeSmm driver installs the
+# SMI handler at the SMM_BASE_HOB.SmBase[Index]+0x8000 for processor
+# Index.
+#
+# Copyright (c) 2024, Intel Corporation. All rights reserved.<BR>
+# SPDX-License-Identifier: BSD-2-Clause-Patent
+#
+##
+
+[Defines]
+  INF_VERSION                    = 0x00010005
+  BASE_NAME                      = SmmRelocationLib
+  FILE_GUID                      = 853E97B3-790C-4EA3-945C-8F622FC47FE8
+  MODULE_TYPE                    = PEIM
+  VERSION_STRING                 = 1.0
+  LIBRARY_CLASS                  = SmmRelocationLib
+
+[Sources]
+  InternalSmmRelocationLib.h
+  SmramSaveStateConfig.c
+  SmmRelocationLib.c
+
+[Sources.Ia32]
+  Ia32/Semaphore.c
+  Ia32/SmmInit.nasm
+
+[Sources.X64]
+  X64/Semaphore.c
+  X64/SmmInit.nasm
+
+[Packages]
+  MdePkg/MdePkg.dec
+  MdeModulePkg/MdeModulePkg.dec
+  UefiCpuPkg/UefiCpuPkg.dec
+
+[LibraryClasses]
+  BaseLib
+  BaseMemoryLib
+  CpuExceptionHandlerLib
+  DebugLib
+  HobLib
+  LocalApicLib
+  MemoryAllocationLib
+  PcdLib
+  PeiServicesLib
+
+[Guids]
+  gSmmBaseHobGuid                                    ## HOB ALWAYS_PRODUCED
+  gEfiSmmSmramMemoryGuid                             ## CONSUMES
+
+[Pcd]
+  gUefiCpuPkgTokenSpaceGuid.PcdCpuMaxLogicalProcessorNumber
+  gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmStackSize       ## CONSUMES
+
+[FeaturePcd]
+  gUefiCpuPkgTokenSpaceGuid.PcdCpuHotPlugSupport     ## CONSUMES
diff --git a/UefiCpuPkg/Library/SmmRelocationLib/SmramSaveStateConfig.c b/UefiCpuPkg/Library/SmmRelocationLib/SmramSaveStateConfig.c
new file mode 100644
index 0000000000..3982158979
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/SmramSaveStateConfig.c
@@ -0,0 +1,91 @@
+/** @file
+  Configure the SMRAM Save State for SMBASE relocation.
+
+  Copyright (c) 2024, Intel Corporation. All rights reserved.<BR>
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+#include "InternalSmmRelocationLib.h"
+
+/**
+  This function configures the SmBase on the currently executing CPU.
+
+  @param[in]     CpuIndex             The index of the CPU.
+  @param[in,out] CpuState             Pointer to SMRAM Save State Map for the
+                                      currently executing CPU. On out, SmBase is
+                                      updated to the new value.
+
+**/
+VOID
+EFIAPI
+ConfigureSmBase (
+  IN     UINTN                 CpuIndex,
+  IN OUT SMRAM_SAVE_STATE_MAP  *CpuState
+  )
+{
+  if (mSmmSaveStateRegisterLma == EFI_MM_SAVE_STATE_REGISTER_LMA_32BIT) {
+    CpuState->x86.SMBASE = (UINT32)mSmBaseForAllCpus[CpuIndex];
+  } else {
+    CpuState->x64.SMBASE = (UINT32)mSmBaseForAllCpus[CpuIndex];
+  }
+}
+
+/**
+  Hook the code executed immediately after an RSM instruction on the currently
+  executing CPU.  The mode of code executed immediately after RSM must be
+  detected, and the appropriate hook must be selected.  Always clear the auto
+  HALT restart flag if it is set.
+
+  @param[in]     CpuIndex                 The processor index for the currently
+                                          executing CPU.
+  @param[in,out] CpuState                 Pointer to SMRAM Save State Map for the
+                                          currently executing CPU.
+  @param[in]     NewInstructionPointer32  Instruction pointer to use if resuming to
+                                          32-bit mode from 64-bit SMM.
+  @param[in]     NewInstructionPointer    Instruction pointer to use if resuming to
+                                          same mode as SMM.
+
+  @retval The value of the original instruction pointer before it was hooked.
+
+**/
+UINT64
+EFIAPI
+HookReturnFromSmm (
+  IN     UINTN                 CpuIndex,
+  IN OUT SMRAM_SAVE_STATE_MAP  *CpuState,
+  IN     UINT64                NewInstructionPointer32,
+  IN     UINT64                NewInstructionPointer
+  )
+{
+  UINT64  OriginalInstructionPointer;
+
+  if (mSmmSaveStateRegisterLma == EFI_MM_SAVE_STATE_REGISTER_LMA_32BIT) {
+    OriginalInstructionPointer = (UINT64)CpuState->x86._EIP;
+    CpuState->x86._EIP        = (UINT32)NewInstructionPointer;
+
+    //
+    // Clear the auto HALT restart flag so the RSM instruction returns
+    // program control to the instruction following the HLT instruction.
+    //
+    if ((CpuState->x86.AutoHALTRestart & BIT0) != 0) {
+      CpuState->x86.AutoHALTRestart &= ~BIT0;
+    }
+  } else {
+    OriginalInstructionPointer = CpuState->x64._RIP;
+    if ((CpuState->x64.IA32_EFER & LMA) == 0) {
+      CpuState->x64._RIP = (UINT32)NewInstructionPointer32;
+    } else {
+      CpuState->x64._RIP = (UINT32)NewInstructionPointer;
+    }
+
+    //
+    // Clear the auto HALT restart flag so the RSM instruction returns
+    // program control to the instruction following the HLT instruction.
+    //
+    if ((CpuState->x64.AutoHALTRestart & BIT0) != 0) {
+      CpuState->x64.AutoHALTRestart &= ~BIT0;
+    }
+  }
+
+  return OriginalInstructionPointer;
+}
diff --git a/UefiCpuPkg/Library/SmmRelocationLib/X64/Semaphore.c b/UefiCpuPkg/Library/SmmRelocationLib/X64/Semaphore.c
new file mode 100644
index 0000000000..54d3462ef8
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/X64/Semaphore.c
@@ -0,0 +1,70 @@
+/** @file
+  Semaphore mechanism to indicate to the BSP that an AP has exited SMM
+  after SMBASE relocation.
+
+  Copyright (c) 2024, Intel Corporation. All rights reserved.<BR>
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include "InternalSmmRelocationLib.h"
+
+X86_ASSEMBLY_PATCH_LABEL  gPatchSmmRelocationOriginalAddressPtr32;
+X86_ASSEMBLY_PATCH_LABEL  gPatchRebasedFlagAddr32;
+
+UINTN             mSmmRelocationOriginalAddress;
+volatile BOOLEAN  *mRebasedFlag;
+
+/**
+AP Semaphore operation in 32-bit mode while BSP runs in 64-bit mode.
+**/
+VOID
+SmmRelocationSemaphoreComplete32 (
+  VOID
+  );
+
+/**
+  Hook return address of SMM Save State so that semaphore code
+  can be executed immediately after AP exits SMM to indicate to
+  the BSP that an AP has exited SMM after SMBASE relocation.
+
+  @param[in] CpuIndex     The processor index.
+  @param[in] RebasedFlag  A pointer to a flag that is set to TRUE
+                          immediately after AP exits SMM.
+
+**/
+VOID
+SemaphoreHook (
+  IN UINTN             CpuIndex,
+  IN volatile BOOLEAN  *RebasedFlag
+  )
+{
+  SMRAM_SAVE_STATE_MAP  *CpuState;
+  UINTN                 TempValue;
+
+  mRebasedFlag = RebasedFlag;
+  PatchInstructionX86 (
+    gPatchRebasedFlagAddr32,
+    (UINT32)(UINTN)mRebasedFlag,
+    4
+    );
+
+  CpuState = (SMRAM_SAVE_STATE_MAP *)(UINTN)(SMM_DEFAULT_SMBASE + SMRAM_SAVE_STATE_MAP_OFFSET);
+
+  mSmmRelocationOriginalAddress = HookReturnFromSmm (
+                                    CpuIndex,
+                                    CpuState,
+                                    (UINT64)(UINTN)&SmmRelocationSemaphoreComplete32,
+                                    (UINT64)(UINTN)&SmmRelocationSemaphoreComplete
+                                    );
+
+  //
+  // Use temp value to fix ICC compiler warning
+  //
+  TempValue = (UINTN)&mSmmRelocationOriginalAddress;
+  PatchInstructionX86 (
+    gPatchSmmRelocationOriginalAddressPtr32,
+    (UINT32)TempValue,
+    4
+    );
+}
diff --git a/UefiCpuPkg/Library/SmmRelocationLib/X64/SmmInit.nasm b/UefiCpuPkg/Library/SmmRelocationLib/X64/SmmInit.nasm
new file mode 100644
index 0000000000..ce4311fffd
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/X64/SmmInit.nasm
@@ -0,0 +1,207 @@
+;------------------------------------------------------------------------------ ;
+; Copyright (c) 2024, Intel Corporation. All rights reserved.<BR>
+; SPDX-License-Identifier: BSD-2-Clause-Patent
+;
+; Module Name:
+;
+;   SmmInit.nasm
+;
+; Abstract:
+;
+;   Functions for relocating SMBASE's for all processors
+;
+;------------------------------------------------------------------------------
+
+%include "StuffRsbNasm.inc"
+
+global  ASM_PFX(gcSmiIdtr)
+global  ASM_PFX(gcSmiGdtr)
+
+extern ASM_PFX(SmmInitHandler)
+extern ASM_PFX(mRebasedFlag)
+extern ASM_PFX(mSmmRelocationOriginalAddress)
+
+global ASM_PFX(gPatchSmmCr3)
+global ASM_PFX(gPatchSmmCr4)
+global ASM_PFX(gPatchSmmCr0)
+global ASM_PFX(gPatchSmmInitStack)
+global ASM_PFX(gcSmmInitSize)
+global ASM_PFX(gcSmmInitTemplate)
+global ASM_PFX(gPatchRebasedFlagAddr32)
+global ASM_PFX(gPatchSmmRelocationOriginalAddressPtr32)
+
+%define LONG_MODE_CS 0x38
+
+    SECTION .data
+
+NullSeg: DQ 0                   ; reserved by architecture
+CodeSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x9b
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+ProtModeCodeSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x9b
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+ProtModeSsSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x93
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+DataSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x93
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+CodeSeg16:
+            DW      -1
+            DW      0
+            DB      0
+            DB      0x9b
+            DB      0x8f
+            DB      0
+DataSeg16:
+            DW      -1
+            DW      0
+            DB      0
+            DB      0x93
+            DB      0x8f
+            DB      0
+CodeSeg64:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x9b
+            DB      0xaf                ; LimitHigh
+            DB      0                   ; BaseHigh
+GDT_SIZE equ $ - NullSeg
+
+ASM_PFX(gcSmiGdtr):
+    DW      GDT_SIZE - 1
+    DQ      NullSeg
+
+ASM_PFX(gcSmiIdtr):
+    DW      0
+    DQ      0
+
+
+    DEFAULT REL
+    SECTION .text
+
+global ASM_PFX(SmmStartup)
+
+BITS 16
+ASM_PFX(SmmStartup):
+    ;mov     eax, 0x80000001             ; read capability
+    ;cpuid
+    ;mov     ebx, edx                    ; rdmsr will change edx. keep it in ebx.
+    mov     eax, strict dword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmCr3):
+    mov     cr3, eax
+o32 lgdt    [cs:ebp + (ASM_PFX(gcSmiGdtr) - ASM_PFX(SmmStartup))]
+    mov     eax, strict dword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmCr4):
+    or      ah,  2                      ; enable XMM registers access
+    mov     cr4, eax
+    mov     ecx, 0xc0000080             ; IA32_EFER MSR
+    rdmsr
+    or      ah, BIT0                    ; set LME bit
+    ;test    ebx, BIT20                  ; check NXE capability
+    ;jz      .1
+    ;or      ah, BIT3                    ; set NXE bit
+;.1:
+    wrmsr
+    mov     eax, strict dword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmCr0):
+    mov     cr0, eax                    ; enable protected mode & paging
+    jmp     LONG_MODE_CS : dword 0      ; offset will be patched to @LongMode
+@PatchLongModeOffset:
+
+BITS 64
+@LongMode:                              ; long-mode starts here
+    mov     rsp, strict qword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmInitStack):
+    and     sp, 0xfff0                  ; make sure RSP is 16-byte aligned
+    ;
+    ; According to X64 calling convention, XMM0~5 are volatile, we need to save
+    ; them before calling C-function.
+    ;
+    sub     rsp, 0x60
+    movdqa  [rsp], xmm0
+    movdqa  [rsp + 0x10], xmm1
+    movdqa  [rsp + 0x20], xmm2
+    movdqa  [rsp + 0x30], xmm3
+    movdqa  [rsp + 0x40], xmm4
+    movdqa  [rsp + 0x50], xmm5
+
+    add     rsp, -0x20
+    call    ASM_PFX(SmmInitHandler)
+    add     rsp, 0x20
+
+    ;
+    ; Restore XMM0~5 after calling C-function.
+    ;
+    movdqa  xmm0, [rsp]
+    movdqa  xmm1, [rsp + 0x10]
+    movdqa  xmm2, [rsp + 0x20]
+    movdqa  xmm3, [rsp + 0x30]
+    movdqa  xmm4, [rsp + 0x40]
+    movdqa  xmm5, [rsp + 0x50]
+
+    StuffRsb64
+    rsm
+
+BITS 16
+ASM_PFX(gcSmmInitTemplate):
+    mov ebp, [cs:@L1 - ASM_PFX(gcSmmInitTemplate) + 0x8000]
+    sub ebp, 0x30000
+    jmp ebp
+@L1:
+    DQ     0; ASM_PFX(SmmStartup)
+
+ASM_PFX(gcSmmInitSize): DW $ - ASM_PFX(gcSmmInitTemplate)
+
+BITS 64
+global ASM_PFX(SmmRelocationSemaphoreComplete)
+ASM_PFX(SmmRelocationSemaphoreComplete):
+    push    rax
+    mov     rax, [ASM_PFX(mRebasedFlag)]
+    mov     byte [rax], 1
+    pop     rax
+    jmp     [ASM_PFX(mSmmRelocationOriginalAddress)]
+
+;
+; Semaphore code running in 32-bit mode
+;
+BITS 32
+global ASM_PFX(SmmRelocationSemaphoreComplete32)
+ASM_PFX(SmmRelocationSemaphoreComplete32):
+    push    eax
+    mov     eax, strict dword 0         ; source operand will be patched
+ASM_PFX(gPatchRebasedFlagAddr32):
+    mov     byte [eax], 1
+    pop     eax
+    jmp     dword [dword 0]             ; destination will be patched
+ASM_PFX(gPatchSmmRelocationOriginalAddressPtr32):
+
+BITS 64
+global ASM_PFX(SmmInitFixupAddress)
+ASM_PFX(SmmInitFixupAddress):
+    lea    rax, [@LongMode]
+    lea    rcx, [@PatchLongModeOffset - 6]
+    mov    dword [rcx], eax
+
+    lea    rax, [ASM_PFX(SmmStartup)]
+    lea    rcx, [@L1]
+    mov    qword [rcx], rax
+    ret
--
2.16.2.windows.1
