public inbox for devel@edk2.groups.io
From: "Ard Biesheuvel" <ardb@kernel.org>
To: Michael Kubacki <mikuback@linux.microsoft.com>
Cc: devel@edk2.groups.io, spbrogan@outlook.com,
	 Oliver Smith-Denny <osde@linux.microsoft.com>,
	Ray Ni <ray.ni@intel.com>,  Jiewen Yao <jiewen.yao@intel.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	 Taylor Beebe <t@taylorbeebe.com>,
	Oliver Smith-Denny <osd@smith-denny.com>,
	Dandan Bi <dandan.bi@intel.com>,  Dun Tan <dun.tan@intel.com>,
	Liming Gao <gaoliming@byosoft.com.cn>,
	 "Kinney, Michael D" <michael.d.kinney@intel.com>,
	Eric Dong <eric.dong@intel.com>,
	 Rahul Kumar <rahul1.kumar@intel.com>,
	Kun Qin <kuqin12@gmail.com>
Subject: Re: [edk2-devel] [PATCH 2/2] UefiCpuPkg/CpuMpPei X64: Reallocate page tables in permanent DRAM
Date: Fri, 9 Jun 2023 00:17:31 +0200	[thread overview]
Message-ID: <CAMj1kXERhsugzXND_YJTdCLmK7OkkNYjPo+0_F-NyX=bE2c4=w@mail.gmail.com> (raw)
In-Reply-To: <7f309bc4-5900-491e-c526-3ab04e23e947@linux.microsoft.com>

On Thu, 8 Jun 2023 at 21:32, Michael Kubacki
<mikuback@linux.microsoft.com> wrote:
>
> I think Sean's point aligns more closely with traditional PI boot phase
> separation goals. Btw, in the project we discussed, this issue was meant
> to capture the DXE memory protection init dependencies -
> https://github.com/tianocore/edk2/issues/4502 if someone would like to
> update that at some point.
>

Yeah, I think that makes sense. But the status quo (on X64) is to
remove *all* protections when handing over to DXE, so dropping that
and running with whatever PEI left behind is hardly going to be
worse (and this is what we do on ARM).

But I agree that it makes sense for these manipulations to be scoped.
So we might manage a separate set of shadow page tables in the CPU
PEIM that also produces the memattr PPI, and manipulations can apply
to the active set only or to both sets (for, e.g., the DXE core, DXE
IPL and DXE mode stack).
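To make the scoping concrete, here is a rough model of the two-set
idea. All names and structures are hypothetical placeholders; the real
implementation would manipulate actual page tables behind the memory
attribute PPI rather than a flat region list.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: the CPU PEIM keeps an "active" set (used while
 * PEI runs) and a "shadow" set (handed over to the DXE core). Each
 * attribute manipulation is applied to the active set only, or to
 * both sets when DXE also needs the mapping. */

enum { MAX_REGIONS = 16 };

typedef struct {
  uint64_t Base;
  uint64_t Length;
  uint64_t Attributes;   /* illustrative NX/RO-style bitmask */
} REGION;

typedef struct {
  REGION Regions[MAX_REGIONS];
  int    Count;
} PAGE_TABLE_SET;

typedef enum {
  ScopeActiveOnly,   /* PEI-only mappings: PEIM code, scratch buffers */
  ScopeBothSets      /* mappings DXE needs: DXE core, DXE IPL, stack  */
} APPLY_SCOPE;

static void SetAttributes (PAGE_TABLE_SET *Set, uint64_t Base,
                           uint64_t Length, uint64_t Attr) {
  assert (Set->Count < MAX_REGIONS);
  Set->Regions[Set->Count++] = (REGION){ Base, Length, Attr };
}

/* Apply a manipulation to the active set only, or to both sets. */
void ApplyMemoryAttributes (PAGE_TABLE_SET *Active,
                            PAGE_TABLE_SET *Shadow,
                            APPLY_SCOPE Scope, uint64_t Base,
                            uint64_t Length, uint64_t Attr) {
  SetAttributes (Active, Base, Length, Attr);
  if (Scope == ScopeBothSets) {
    SetAttributes (Shadow, Base, Length, Attr);
  }
}
```

The point of the sketch is only the scoping: PEI-local mappings never
leak into the set that DXE inherits.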

That would at least permit us to drop the current kludge in DxeIpl,
which only knows how to create an unrestricted 1:1 mapping of the
entire address space, with a single NX mapping (for the stack) and a
single non-encrypted mapping (for the GHCB page on confidential VMs).
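For reference, the policy that kludge implements boils down to
something like the following (attribute bits and ranges are invented
for illustration; the real code builds x86 page table entries):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of the mapping DxeIpl currently builds: an
 * unrestricted 1:1 map of the whole address space, with exactly two
 * special cases, an NX mapping for the DXE-mode stack and a
 * non-encrypted mapping for the GHCB page on confidential VMs. */

#define ATTR_RWX   0x0ull   /* default: read/write/execute identity map */
#define ATTR_NX    0x1ull   /* no-execute */
#define ATTR_NOENC 0x2ull   /* mapped without memory encryption */

typedef struct {
  uint64_t StackBase, StackSize;  /* DXE-mode stack */
  uint64_t GhcbBase,  GhcbSize;   /* GHCB page; size 0 if not a CoCo VM */
} MAP_POLICY;

/* Return the attributes the identity map gives a given address. */
uint64_t AttributesFor (const MAP_POLICY *P, uint64_t Addr) {
  if (Addr >= P->StackBase && Addr < P->StackBase + P->StackSize) {
    return ATTR_NX;
  }
  if (P->GhcbSize != 0 &&
      Addr >= P->GhcbBase && Addr < P->GhcbBase + P->GhcbSize) {
    return ATTR_NOENC;
  }
  return ATTR_RWX;  /* everything else: unrestricted 1:1 */
}
```

Everything outside those two carve-outs ends up fully unrestricted,
which is exactly why carrying forward the stricter PEI mappings
instead is attractive.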

It would also provide a better basis for architectures to carry their
own specific logic for this, instead of having a subdirectory for
each arch under DxeIpl/.



> On 6/8/2023 2:27 PM, Sean wrote:
> > On 6/8/2023 10:48 AM, Ard Biesheuvel wrote:
> >> On Thu, 8 Jun 2023 at 19:39, Oliver Smith-Denny
> >> <osde@linux.microsoft.com> wrote:
> >>> On 6/8/2023 10:23 AM, Ard Biesheuvel wrote:
> >>>> Currently, we rely on the logic in DXE IPL to create new page tables
> >>>> from scratch when executing in X64 mode, which means that we run with
> >>>> the initial page tables all throughout PEI, and never enable
> >>>> protections
> >>>> such as the CPU stack guard, even though the logic is already in place
> >>>> for IA32.
> >>>>
> >>>> So let's enable the existing logic for X64 as well. This will permit us
> >>>> to apply stricter memory permissions to code and data allocations, as
> >>>> well as the stack, when executing in PEI. It also makes the DxeIpl
> >>>> logic
> >>>> redundant, and should allow us to make the PcdDxeIplBuildPageTables
> >>>> feature PCD limited to IA32 DxeIpl loading the x64 DXE core.
> >>>>
> >>>> When running in long mode, use the same logic that DxeIpl uses to
> >>>> determine the size of the address space, whether or not to use 1 GB
> >>>> leaf
> >>>> entries and whether or not to use 5 level paging. Note that in long
> >>>> mode, PEI is entered with paging enabled, and given that switching
> >>>> between 4 and 5 levels of paging is not currently supported without
> >>>> dropping out of 64-bit mode temporarily, all we can do is carry on
> >>>> without changing the number of levels.
> >>>>
> >>> I certainly agree with extending the ability to have memory protections
> >>> in PEI (and trying to unify across x86 and ARM (and beyond :)).
> >>>
> >>> A few things I am trying to understand:
> >>>
> >>> Does ARM today rebuild the page table in DxeIpl? Or is it using an
> >>> earlier built page table?
> >>>
> >> No. Most platforms run without any page tables until the permanent
> >> memory is installed, at which point it essentially maps what the
> >> platform describes as device memory and as normal memory.
> >>
> >>
> >>> If I understand your proposal correctly, with the addition of this
> >>> patch, you are suggesting we can drop creating new page tables in DxeIpl
> >>> and use only one page table throughout.
> >> Yes.
> >>
> >>> Again, I like the idea of having
> >>> mapped memory protections that continue through, but do you have
> >>> concerns that we may end up with garbage from PEI in DXE in the page
> >>> table? For OEMs, they may not control PEI and therefore be at the whim
> >>> of another's PEI page table. Would you envision the GCD gets built from
> >>> the existing page table or that the GCD gets built according to resource
> >>> descriptor HOBs and DxeCore ensures that the page table reflects what
> >>> the HOBs indicated?
> >>>
> >> If there is a reason to start with a clean slate when DxeIpl hands
> >> over to DXE core, I'd prefer that to be a conscious decision rather
> >> than a consequence of the X64 vs IA32 legacy.
> >>
> >> I think you can make a case for priming the GCD map based on resource
> >> descriptors rather than current mappings, with the exception of DXE
> >> core itself and the DXE mode stack. But I'd like to understand better
> >> what we think might be wrong with the page tables as PEI leaves them.
> >
> >
> > On many platforms there are different "owners" for these different parts
> > of firmware code.  The PEI phase is a place where the Silicon vendor and
> > Platform teams must work together.  The Dxe Phase may have a different
> > set of partners.  Industry trends definitely show more silicon vendor
> > driven diversity in the PEI phase of the boot process and with this
> > diversity it is much harder to make solid assumptions about the
> > execution environment.   We have also discussed in the past meeting that
> > PEI should be configurable using different solutions given it isn't a
> > place where unknown 3rd party compatibility is critical.  This means
> > that PEI might have different requirements than DXE and thus the
> > configuration inherited from PEI may not be compliant. Additionally, the
> > code and driver mappings from PEI phase should not be relevant in DXE.
> > Even with the same architecture being used these are different execution
> > phases with different constructs.  Keeping the PEI code mapped will only
> > lead to additional security and correctness challenges.  Finally, as an
> > overarching theme of this project we have suggested we should not be
> > coupling the various phases, their requirements, and their assumptions
> > together.  You could just as easily apply this to DXE and SMM/MM.  These
> > are all independent execution environments and the more we can provide
> > simplification and consistency the better our chances are of getting
> > correct implementations across the ecosystem.
> >

Thread overview: 11+ messages
2023-06-08 17:23 [PATCH 0/2] UefiCpuPkg/CpuMpPei X64: reallocate page tables in PEI Ard Biesheuvel
2023-06-08 17:23 ` [PATCH 1/2] UefiCpuPkg/CpuMpPei: Print correct buffer size used for page table Ard Biesheuvel
2023-06-08 19:25   ` [edk2-devel] " Michael Kubacki
2023-06-08 17:23 ` [PATCH 2/2] UefiCpuPkg/CpuMpPei X64: Reallocate page tables in permanent DRAM Ard Biesheuvel
2023-06-08 17:39   ` [edk2-devel] " Oliver Smith-Denny
2023-06-08 17:48     ` Ard Biesheuvel
2023-06-08 18:27       ` Sean
2023-06-08 19:32         ` Michael Kubacki
2023-06-08 22:17           ` Ard Biesheuvel [this message]
2023-06-08 19:38   ` Michael Kubacki
2023-06-09  0:24   ` Ni, Ray
