From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from us-smtp-delivery-1.mimecast.com (us-smtp-delivery-1.mimecast.com
 [207.211.31.81]) by mx.groups.io with SMTP id
 smtpd.web10.18390.1596202219850048201 for ;
 Fri, 31 Jul 2020 06:30:20 -0700
Authentication-Results: mx.groups.io;
 dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=iktNNYzl;
 spf=pass (domain: redhat.com, ip: 207.211.31.81, mailfrom: lersek@redhat.com)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1596202219;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
  to:to:cc:cc:mime-version:mime-version:content-type:content-type:
  content-transfer-encoding:content-transfer-encoding:
  in-reply-to:in-reply-to:references:references;
 bh=/cW/U/7zCNImG0j0utrB+R0t3fwIL/Z4EQvtmFDkhqs=;
 b=iktNNYzlqyYXhx8Y43sdAWqCPEoVQoA3gPwh5t+slEjCXBeROIr25jXx1WOaZqdcVrdeha
  qRrUBOLfbhYptFqxNqrEg0ncKkXdd7g4/5Q0cASNSM0tnfnE6QG3+5fNQplsmG8drl/pQy
  UicWMj8pqIXljto8eLv1F5z0e1Hr4h8=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-263-U5cf8sKzMtauLvIM3UnkdQ-1; Fri, 31 Jul 2020 09:30:13 -0400
X-MC-Unique: U5cf8sKzMtauLvIM3UnkdQ-1
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with
 ESMTPS id BCC8779EC4; Fri, 31 Jul 2020 13:30:11 +0000 (UTC)
Received: from lacos-laptop-7.usersys.redhat.com (ovpn-114-160.ams2.redhat.com
 [10.36.114.160]) by smtp.corp.redhat.com (Postfix) with ESMTP id 6433E7C0ED;
 Fri, 31 Jul 2020 13:30:10 +0000 (UTC)
Subject: Re: [edk2-devel] [PATCH] UefiCpuPkg/PiSmmCpuDxeSmm: pause in
 WaitForSemaphore() before re-fetch
To: devel@edk2.groups.io, eric.dong@intel.com
Cc: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= , "Kumar, Rahul1" ,
 "Ni, Ray"
References: <20200729185217.10084-1-lersek@redhat.com>
From: "Laszlo Ersek"
Message-ID:
Date: Fri, 31 Jul 2020 15:30:09 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101
 Firefox/52.0 Thunderbird/52.9.1
MIME-Version: 1.0
In-Reply-To:
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

On 07/31/20 03:10, Dong, Eric wrote:
> Reviewed-by: Eric Dong

Thank you, merged as commit 9001b750df64, via .

Laszlo

>> -----Original Message-----
>> From: Laszlo Ersek
>> Sent: Thursday, July 30, 2020 2:52 AM
>> To: edk2-devel-groups-io
>> Cc: Dong, Eric ; Philippe Mathieu-Daudé
>> ; Kumar, Rahul1 ; Ni, Ray
>>
>> Subject: [PATCH] UefiCpuPkg/PiSmmCpuDxeSmm: pause in
>> WaitForSemaphore() before re-fetch
>>
>> Most busy waits (spinlocks) in
>> "UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c"
>> already call CpuPause() in their loop bodies; see SmmWaitForApArrival(),
>> APHandler(), and SmiRendezvous(). However, the "main wait" within
>> APHandler():
>>
>>> //
>>> // Wait for something to happen
>>> //
>>> WaitForSemaphore (mSmmMpSyncData->CpuData[CpuIndex].Run);
>>
>> doesn't do so, as WaitForSemaphore() keeps trying to acquire the
>> semaphore without pausing.
>>
>> The performance impact is especially notable in QEMU/KVM + OVMF
>> virtualization with CPU overcommit (that is, when the guest has significantly
>> more VCPUs than the host has physical CPUs). The guest BSP is working
>> heavily in:
>>
>>   BSPHandler()                  [MpService.c]
>>     PerformRemainingTasks()     [PiSmmCpuDxeSmm.c]
>>       SetUefiMemMapAttributes() [SmmCpuMemoryManagement.c]
>>
>> while the many guest APs are spinning in the "Wait for something to happen"
>> semaphore acquisition, in APHandler(). The guest APs are generating useless
>> memory traffic and saturating host CPUs, hindering the guest BSP's progress
>> in SetUefiMemMapAttributes().
>>
>> Rework the loop in WaitForSemaphore(): call CpuPause() in every iteration
>> after the first check fails. Due to Pause Loop Exiting (known as Pause Filter on
>> AMD), the host scheduler can favor the guest BSP over the guest APs.
>>
>> Running a 16 GB RAM + 512 VCPU guest on a 448 PCPU host, this patch
>> reduces OVMF boot time (counted until reaching grub) from 20-30 minutes
>> to less than 4 minutes.
>>
>> The patch should benefit physical machines as well -- according to the Intel
>> SDM, PAUSE "Improves the performance of spin-wait loops". Adding PAUSE
>> to the generic WaitForSemaphore() function is considered a general
>> improvement.
>>
>> Cc: Eric Dong
>> Cc: Philippe Mathieu-Daudé
>> Cc: Rahul Kumar
>> Cc: Ray Ni
>> Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1861718
>> Signed-off-by: Laszlo Ersek
>> ---
>>
>> Notes:
>>     Repo:   https://pagure.io/lersek/edk2.git
>>     Branch: sem_wait_pause_rhbz1861718
>>
>>  UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 18 +++++++++++-------
>>  1 file changed, 11 insertions(+), 7 deletions(-)
>>
>> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
>> b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
>> index 57e788c01b1f..4bcd217917d7 100644
>> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
>> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
>> @@ -40,14 +40,18 @@ WaitForSemaphore (
>>  {
>>    UINT32 Value;
>>
>> -  do {
>> +  for (;;) {
>>      Value = *Sem;
>> -  } while (Value == 0 ||
>> -           InterlockedCompareExchange32 (
>> -             (UINT32*)Sem,
>> -             Value,
>> -             Value - 1
>> -             ) != Value);
>> +    if (Value != 0 &&
>> +        InterlockedCompareExchange32 (
>> +          (UINT32*)Sem,
>> +          Value,
>> +          Value - 1
>> +          ) == Value) {
>> +      break;
>> +    }
>> +    CpuPause ();
>> +  }
>>    return Value - 1;
>>  }
>>
>> --
>> 2.19.1.3.g30247aa5d201
>
>
>
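
For readers who want to try the acquire-then-pause pattern described above
outside of the edk2 tree, here is a minimal, standalone C sketch. It is not
the patch itself: it substitutes GCC/Clang atomic builtins and _mm_pause()
for edk2's InterlockedCompareExchange32() and CpuPause(), and the name
WaitForSemaphoreDemo as well as the small driver in main() are purely
illustrative.

/*
 * Standalone sketch (not edk2 code) of the pattern introduced by the patch:
 * attempt to decrement the semaphore with a compare-and-exchange, and issue
 * a PAUSE between failed attempts, so that Pause Loop Exiting / Pause Filter
 * can let an overcommitted host deschedule the spinning vCPU.
 *
 * Build (x86-64, GCC or Clang):  cc -O2 -o sem_demo sem_demo.c
 */
#include <stdint.h>
#include <stdio.h>
#include <immintrin.h>          /* _mm_pause() */

/* Hypothetical stand-in for WaitForSemaphore(): waits until *Sem is
   non-zero, decrements it atomically, and returns the decremented value. */
static uint32_t
WaitForSemaphoreDemo (volatile uint32_t *Sem)
{
  uint32_t Value;

  for (;;) {
    Value = *Sem;
    if (Value != 0 &&
        __atomic_compare_exchange_n (Sem, &Value, Value - 1,
                                     0 /* strong CAS */,
                                     __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
      break;                    /* we own one count of the semaphore */
    }
    _mm_pause ();               /* analogous to CpuPause() in BaseLib */
  }
  return Value - 1;
}

int
main (void)
{
  volatile uint32_t Sem = 3;    /* pretend the semaphore was posted 3 times */

  printf ("count after wait: %u\n", WaitForSemaphoreDemo (&Sem)); /* prints 2 */
  return 0;
}

In the real firmware the function is built against edk2's BaseLib and
SynchronizationLib, so the translation above is only meant to show the loop
shape: check, attempt the interlocked decrement, and pause before re-fetching.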