public inbox for devel@edk2.groups.io
From: "Igor Mammedov" <imammedo@redhat.com>
To: "Laszlo Ersek" <lersek@redhat.com>
Cc: devel@edk2.groups.io, qemu-devel@nongnu.org,
	yingwen.chen@intel.com, phillip.goerl@oracle.com,
	alex.williamson@redhat.com, jiewen.yao@intel.com,
	jun.nakajima@intel.com, michael.d.kinney@intel.com,
	pbonzini@redhat.com, boris.ostrovsky@oracle.com,
	rfc@edk2.groups.io, joao.m.martins@oracle.com,
	Brijesh Singh <brijesh.singh@amd.com>
Subject: Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K SMRAM at default SMBASE address
Date: Tue, 24 Sep 2019 13:19:36 +0200	[thread overview]
Message-ID: <20190924131936.7dec5e6c@redhat.com> (raw)
In-Reply-To: <c18f1ada-3eca-d5f1-da4f-e74181c5a862@redhat.com>

On Mon, 23 Sep 2019 20:35:02 +0200
"Laszlo Ersek" <lersek@redhat.com> wrote:

> On 09/20/19 11:28, Laszlo Ersek wrote:
> > On 09/20/19 10:28, Igor Mammedov wrote:  
> >> On Thu, 19 Sep 2019 19:02:07 +0200
> >> "Laszlo Ersek" <lersek@redhat.com> wrote:
> >>  
> >>> Hi Igor,
> >>>
> >>> (+Brijesh)
> >>>
> >>> long-ish pondering ahead, with a question at the end.  
> >> [...]
> >>  
> >>> Finally: can you please remind me why we lock down 128KB (32 pages) at
> >>> 0x3_0000, and not just half of that? What do we need the range at
> >>> [0x4_0000..0x4_FFFF] for?  
> >>
> >>
> >> If I recall correctly, the CPU consumes 64K as the save/restore
> >> area. The remaining 64K is temporary RAM for use by the SMI
> >> relocation handler; if it's possible to get away without it, we
> >> can drop it and lock only the 64K required for the CPU state.
> >> That won't help with the SEV conflict though, as that conflict is
> >> in the first 64K.
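
For illustration, the 64K + 64K split described above could be sketched
like this (the constant names are made up for the example; they are not
taken from QEMU or the patch):

  /* hypothetical layout of the 128K locked at the default SMBASE */
  #define DEFAULT_SMBASE        0x30000
  #define SMRAM_CPU_STATE_SIZE  0x10000  /* 64K: CPU save/restore area     */
  #define SMRAM_RELOC_SCRATCH   0x10000  /* 64K: temporary RAM for the SMI */
                                         /*      relocation handler        */
  #define SMRAM_AT_SMBASE_SIZE  (SMRAM_CPU_STATE_SIZE + SMRAM_RELOC_SCRATCH)
                                         /* 128K: 0x30000 .. 0x4FFFF       */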
> > 
> > OK. Let's go with 128KB for now. Shrinking the area is always easier
> > than growing it.
> >   
> >> On the QEMU side, we could drop the black-hole approach and
> >> allocate a dedicated SMRAM region, which would explicitly get
> >> mapped into the RAM address space and then get unmapped (locked)
> >> after SMI handler initialization. That way SMRAM would be
> >> accessible only from the SMM context, and RAM at 0x30000 could be
> >> used as normal while SMRAM is unmapped.
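
As a rough sketch (not the code from this patch; names and placement
are hypothetical, using only the generic QEMU memory-region API), that
alternative could look something like:

  #include "qemu/osdep.h"
  #include "qemu/units.h"
  #include "qapi/error.h"
  #include "exec/memory.h"

  /* dedicated SMRAM, mapped over RAM at the default SMBASE during
   * firmware init and unmapped ("locked") afterwards */
  static MemoryRegion smbase_smram;

  static void smbase_smram_init(MemoryRegion *system_memory, Object *owner)
  {
      memory_region_init_ram(&smbase_smram, owner, "smbase-smram",
                             128 * KiB, &error_fatal);
      /* higher-priority overlap hides the underlying RAM at 0x30000 */
      memory_region_add_subregion_overlap(system_memory, 0x30000,
                                          &smbase_smram, 1);
  }

  static void smbase_smram_lock(void)
  {
      /* unmap the window from the normal address space once the SMI
       * relocation handler has been set up; a separate alias in the SMM
       * address space (not shown here) would keep it reachable from SMM
       * only, and RAM at 0x30000 becomes usable as normal again */
      memory_region_set_enabled(&smbase_smram, false);
  }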
> > 
> > I prefer the black-hole approach, introduced in your current patch
> > series, if it can work. Way less opportunity for confusion.
> > 
> > I've started work on the counterpart OVMF patches; I'll report back.  
> 
> I've got good results. For this (1/2) QEMU patch:
> 
> Tested-by: Laszlo Ersek <lersek@redhat.com>
> 
> I tested the following scenarios. In every case, I verified the OVMF
> log, and also the "info mtree" monitor command's result (i.e. whether
> "smbase-blackhole" / "smbase-window" were disabled or enabled). Mostly,
> I diffed these text files between the test scenarios (looking for
> desired / undesired differences). In the Linux guests, I checked /
> compared the dmesg too (wrt. the UEFI memmap).
> 
> - unpatched OVMF (regression test), Fedora guest, normal boot and S3
> 
> - patched OVMF, but feature disabled with "-global mch.smbase-smram=off"
> (another regression test), Fedora guest, normal boot and S3
> 
> - patched OVMF, feature enabled, Fedora and various Windows guests
> (win7, win8, win10 families, client/server), normal boot and S3
> 
> - a subset of the above guests, with S3 disabled (-global
>   ICH9-LPC.disable_s3=1), and obviously S3 resume not tested
> 
> SEV: used a 5.2-ish Linux guest, with S3 disabled (S3 is not yet
> supported under SEV):
> 
> - unpatched OVMF (regression test), normal boot
> 
> - patched OVMF but feature disabled on the QEMU cmdline (another
> regression test), normal boot
> 
> - patched OVMF, feature enabled, normal boot.
> 
> I plan to post the OVMF patches tomorrow, for discussion.
> 
> (It's likely too early to push these QEMU / edk2 patches right now -- we
> don't know yet if this path will take us to the destination. For now, it
> certainly looks great.)

Laszlo, thanks for trying it out.
It's nice to hear that the approach is somewhat usable.
Hopefully we won't have to invent a 'paused' CPU mode.

Please CC me on your patches
(not that I qualify for reviewing them,
but maybe I could learn a thing or two from them).

> Thanks
> Laszlo
> 
> 
> 



Thread overview: 17+ messages
2019-09-17 13:07 [PATCH 0/2] q35: mch: allow to lock down 128K RAM at default SMBASE address Igor Mammedov
2019-09-17 13:07 ` [PATCH 1/2] q35: implement 128K SMRAM " Igor Mammedov
2019-09-19 17:02   ` [Qemu-devel] " Laszlo Ersek
2019-09-20  8:28     ` [edk2-devel] " Igor Mammedov
2019-09-20  9:28       ` Laszlo Ersek
2019-09-23 18:35         ` Laszlo Ersek
2019-09-24 11:19           ` Igor Mammedov [this message]
2019-09-30 11:51             ` Laszlo Ersek
2019-09-30 12:36               ` Igor Mammedov
2019-09-30 14:22                 ` Yao, Jiewen
2019-10-01 18:03                   ` Laszlo Ersek
2019-10-04 11:31                     ` Igor Mammedov
2019-10-07  9:44                       ` Laszlo Ersek
2019-09-24 11:47         ` Paolo Bonzini
2019-09-17 13:07 ` [PATCH 2/2] tests: q35: MCH: add default SMBASE SMRAM lock test Igor Mammedov
2019-09-17 15:23 ` [edk2-devel] [PATCH 0/2] q35: mch: allow to lock down 128K RAM at default SMBASE address no-reply
2019-09-17 15:24 ` no-reply
