public inbox for devel@edk2.groups.io
From: "Yao, Jiewen" <jiewen.yao@intel.com>
To: "Kinney, Michael D" <michael.d.kinney@intel.com>,
	Laszlo Ersek <lersek@redhat.com>,
	"Fan, Jeff" <jeff.fan@intel.com>
Cc: edk2-devel-01 <edk2-devel@lists.01.org>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: SMRAM sizes on large hosts
Date: Wed, 3 May 2017 01:20:02 +0000
Message-ID: <74D8A39837DF1E4DA445A8C0B3885C503A936DEF@shsmsx102.ccr.corp.intel.com>
In-Reply-To: <E92EE9817A31E24EB0585FDF735412F57D1700F8@ORSMSX113.amr.corp.intel.com>

Yes, increasing the TSEG size might be an easy way to address this.

I have seen physical boards use a 16MB or even a 32MB TSEG on large-memory, many-core systems.
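
For context, here is a minimal sketch of how firmware reads the TSEG
size on a Q35-class chipset. The ESMRAMC offset (0x9E in the DRAM
controller at bus 0, device 0, function 0) and the TSEG_SZ field
encodings follow QEMU's MCH model and should be treated as assumptions,
not a statement about any physical board. Note that the two-bit field
only encodes 1MB, 2MB, and 8MB, which is why larger sizes such as the
16MB/32MB above imply a chipset with a different register layout:

  #include <Library/PciLib.h>   // edk2 PciLib: PciRead8(), PCI_LIB_ADDRESS()

  //
  // Return the configured TSEG size in megabytes, or 0 for the reserved
  // encoding. Offset and encodings are assumed from QEMU's Q35 MCH model.
  //
  UINT32
  GetTsegSizeMbytes (
    VOID
    )
  {
    UINT8  EsmramC;

    EsmramC = PciRead8 (PCI_LIB_ADDRESS (0, 0, 0, 0x9E));  // ESMRAMC
    switch (EsmramC & (BIT2 | BIT1)) {                      // TSEG_SZ field
      case 0:    return 1;   // 1MB
      case BIT1: return 2;   // 2MB
      case BIT2: return 8;   // 8MB
      default:   return 0;   // reserved encoding
    }
  }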

Thank you
Yao Jiewen

From: Kinney, Michael D
Sent: Wednesday, May 3, 2017 4:49 AM
To: Laszlo Ersek <lersek@redhat.com>; Fan, Jeff <jeff.fan@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>; Kinney, Michael D <michael.d.kinney@intel.com>
Cc: edk2-devel-01 <edk2-devel@lists.01.org>; Gerd Hoffmann <kraxel@redhat.com>; Paolo Bonzini <pbonzini@redhat.com>
Subject: RE: SMRAM sizes on large hosts

Laszlo,

Is it possible to add more TSEG sizes to the Q35 board?

There may be things we can do to reduce the per-CPU SMRAM
overhead, but those will likely take some time, be more
complex, and require significant validation.  And... we may
run into the same issue again if requirements to increase
the number of VCPUs continue to grow.

Thanks,

Mike
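
(For illustration: one way to support more TSEG sizes would be a
negotiation register on the DRAM controller -- firmware writes a query
value, and the board answers with the number of megabytes it can
provide. The sketch below assumes a 16-bit register at config offset
0x50 and a 0xFFFF query value; this mirrors the extended-TSEG scheme
that QEMU's MCH later adopted, but nothing like it existed on the Q35
board at the time of this thread.)

  #include <Library/PciLib.h>   // edk2 PciLib: PciRead16()/PciWrite16()

  #define DRAMC_EXT_TSEG_MBYTES  0x50    // ASSUMED 16-bit negotiation register
  #define EXT_TSEG_QUERY         0xFFFF  // ASSUMED "query" magic value

  //
  // Probe the hypothetical register on the DRAM controller (B0:D0:F0).
  // Returns the TSEG size in MB the board offers, or 0 if the register
  // is not implemented (the write is ignored, or reads back unchanged).
  //
  UINT16
  QueryExtendedTsegMbytes (
    VOID
    )
  {
    UINT16  MBytes;

    PciWrite16 (PCI_LIB_ADDRESS (0, 0, 0, DRAMC_EXT_TSEG_MBYTES),
                EXT_TSEG_QUERY);
    MBytes = PciRead16 (PCI_LIB_ADDRESS (0, 0, 0, DRAMC_EXT_TSEG_MBYTES));
    if (MBytes == EXT_TSEG_QUERY) {
      MBytes = 0;   // write stuck but no handler: treat as unsupported
    }
    return MBytes;
  }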

> -----Original Message-----
> From: Laszlo Ersek [mailto:lersek@redhat.com]
> Sent: Tuesday, May 2, 2017 11:16 AM
> To: Fan, Jeff <jeff.fan@intel.com>; Kinney, Michael D <michael.d.kinney@intel.com>;
> Yao, Jiewen <jiewen.yao@intel.com>
> Cc: edk2-devel-01 <edk2-devel@lists.01.org>; Gerd Hoffmann <kraxel@redhat.com>; Paolo
> Bonzini <pbonzini@redhat.com>
> Subject: SMRAM sizes on large hosts
>
> Hi All,
>
> in your experience, how much SMRAM do "big hosts" provide? (Machines
> that have, say, ~300 CPU cores.)
>
> With QEMU's Q35 board, which provides 8MB of SMRAM (TSEG), we're hitting
> various out-of-SMRAM conditions with OVMF at around 230-240 VCPUs. We'd
> like to go to a higher VCPU count than that.
>
> * So, in your experience, how much SMRAM do physical boards that carry
> such a high number of cores provide?
>
> * Perhaps we'll have to do something about the SMRAM size on QEMU in the
> longer term, but until then, can you guys recommend various "cheap
> tricks" to decrease per-VCPU SMRAM usage?
>
> For example, in OVMF we have a 16KB SMM stack per VCPU, and we also
> enable the SMM stack overflow guard page -- we had been hit by an SMM
> stack overflow with the original 8KB stack size, so we both increased
> the stack size and enabled the guard page; see commits
>
>   509f8425b75d UefiCpuPkg: change PcdCpuSmmStackGuard default to TRUE
>   0d0c245dfb14 OvmfPkg: set SMM stack size to 16KB
>
> I've now tried to decrease the stack size to the "middle point" 12KB.
> That stack size does not reproduce the SMM stack overflow seen
> originally, but it also doesn't help with the SMRAM exhaustion -- we
> cannot go to any higher VCPU count with it.
>
> Are there any other "tweakables" (PCDs) we could massage to see the
> per-VCPU SMRAM usage go down?
>
> Here's a (somewhat indiscriminate) list of PCDs, from the OVMF Ia32X64
> build report file, where each PCD's name contains "smm":
>
>     PcdCpuSmmBlockStartupThisAp                 :   FLAG  (BOOLEAN) = 0
>     PcdCpuSmmDebug                              :   FLAG  (BOOLEAN) = 0
>     PcdCpuSmmFeatureControlMsrLock              :   FLAG  (BOOLEAN) = 1
>     PcdCpuSmmProfileEnable                      :   FLAG  (BOOLEAN) = 0
>     PcdCpuSmmProfileRingBuffer                  :   FLAG  (BOOLEAN) = 0
>     PcdCpuSmmStackGuard                         :   FLAG  (BOOLEAN) = 1
>  *P PcdCpuSmmEnableBspElection                  :   FLAG  (BOOLEAN) = 0
>  *P PcdSmmSmramRequire                          :   FLAG  (BOOLEAN) = 1
>
>     PcdCpuSmmCodeAccessCheckEnable              :  FIXED  (BOOLEAN) = 1
>     PcdCpuSmmProfileSize                        :  FIXED   (UINT32) = 0x200000
>     PcdCpuSmmStaticPageTable                    :  FIXED  (BOOLEAN) = 1
>  *P PcdCpuSmmStackSize                          :  FIXED   (UINT32) = 0x4000
>
>     PcdLoadFixAddressSmmCodePageNumber          :  PATCH   (UINT32) = 0
>
>     PcdS3BootScriptTablePrivateSmmDataPtr       :    DYN   (UINT64) = 0x0
>  *P PcdCpuSmmApSyncTimeout                      :    DYN   (UINT64) = 100000
>  *P PcdCpuSmmSyncMode                           :    DYN    (UINT8) = 0x01
>
> (BTW, security features should not be disabled, even if they saved some
> SMRAM.)
>
> Thank you,
> Laszlo
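
As a sanity check on the ~230-240 VCPU ceiling, a back-of-the-envelope
budget using the numbers quoted above (8MB TSEG, 16KB stack plus one
4KB guard page per VCPU) lands in the same range. The per-VCPU
save-state/entry-code tile and the shared SMM core overhead below are
illustrative assumptions, not measured values:

  #include <stdio.h>

  int
  main (void)
  {
    const unsigned TsegBytes   = 8u << 20;   // Q35 TSEG: 8MB
    const unsigned StackBytes  = 16u << 10;  // PcdCpuSmmStackSize = 0x4000
    const unsigned GuardBytes  = 4u << 10;   // one guard page per VCPU
    const unsigned TileBytes   = 8u << 10;   // ASSUMED save state + SMI entry
    const unsigned SharedBytes = 2u << 20;   // ASSUMED SMM core + drivers

    unsigned PerVcpu = StackBytes + GuardBytes + TileBytes;

    printf ("~%u KB per VCPU -> roughly %u VCPUs fit in TSEG\n",
            PerVcpu >> 10, (TsegBytes - SharedBytes) / PerVcpu);
    return 0;
  }

With these assumptions the budget works out to 28KB per VCPU and about
219 VCPUs, in the same ballpark as the observed limit. And for anyone
repeating the 12KB stack experiment, a PCD override in the platform DSC
is all it takes (standard edk2 DSC syntax; the exact section placement
in OVMF's DSC files may differ):

  [PcdsFixedAtBuild]
    gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmStackSize|0x3000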


Thread overview: 23+ messages
2017-05-02 18:16 SMRAM sizes on large hosts Laszlo Ersek
2017-05-02 20:49 ` Kinney, Michael D
2017-05-03  1:20   ` Yao, Jiewen [this message]
2017-05-03  6:57   ` Gerd Hoffmann
2017-05-03 12:56     ` Paolo Bonzini
2017-05-03 13:14       ` Laszlo Ersek
2017-05-03 13:26         ` Paolo Bonzini
2017-05-03 13:35           ` Laszlo Ersek
2017-05-03 13:55             ` Paolo Bonzini
2017-05-03 22:34               ` Laszlo Ersek
2017-05-03 12:58     ` Laszlo Ersek
2017-05-03 13:44       ` Gerd Hoffmann
2017-05-03 22:33         ` Laszlo Ersek
2017-05-03 23:36           ` Laszlo Ersek
2017-05-04  6:18             ` Gerd Hoffmann
2017-05-04 14:52             ` Gerd Hoffmann
2017-05-04 15:21               ` Laszlo Ersek
2017-05-04  8:23           ` Igor Mammedov
2017-05-04 11:34             ` Laszlo Ersek
2017-05-04 14:00               ` Igor Mammedov
2017-05-04 14:41                 ` Gerd Hoffmann
2017-05-04 14:50                   ` Igor Mammedov
2017-05-04 15:19                     ` Laszlo Ersek
