From: "Anthony PERARD" <anthony.perard@citrix.com>
To: <devel@edk2.groups.io>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	<xen-devel@lists.xenproject.org>,
	Ard Biesheuvel <ardb+tianocore@kernel.org>,
	Jiewen Yao <jiewen.yao@intel.com>,
	Jordan Justen <jordan.l.justen@intel.com>,
	Gerd Hoffmann <kraxel@redhat.com>, Julien Grall <julien@xen.org>
Subject: [PATCH 1/1] OvmfPkg/XenPvBlkDxe: Fix memory barrier macro
Date: Tue, 19 Jul 2022 14:52:30 +0100
Message-ID: <20220719135230.32838-1-anthony.perard@citrix.com>

From: Anthony PERARD <anthony.perard@citrix.com>

The macro "xen_mb()" needs to be a full memory barrier, that is it
needs to also prevent stores from been reorder after loads which an
x86 CPU can do (as I understand from reading [1]). So this patch makes
use of "mfence" instruction.

Currently, there is a good chance that OvmfXen hangs in
XenPvBlockSync(), in an infinite loop, waiting for the last request to
be consumed by the backend, while the backend does not know there is a
new request and does nothing. This happens because the backend looks
for requests in two ways: either it is already working on one and uses
RING_FINAL_CHECK_FOR_REQUESTS(), or it is waiting for an
event/notification. The frontend (OvmfXen) therefore does not need to
send a notification if the backend is already working; checking
whether a notification is needed is done by
RING_PUSH_REQUESTS_AND_CHECK_NOTIFY().

That last macro is where the order of the store versus the load
matters: the macro first stores the fact that there is a new request,
then loads the value of the last event that the backend has processed,
to check whether an asynchronous notification is needed (see the
sketch below). If that store and that load are reordered, OvmfXen can
wrongly decide not to send a notification, and both sides just wait.
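
For reference, the store/load pair inside
RING_PUSH_REQUESTS_AND_CHECK_NOTIFY() looks roughly like this (a
simplified rendering of the macro from the Xen ring headers, not a
verbatim copy):

  RING_IDX Old = Ring->sring->req_prod;
  RING_IDX New = Ring->req_prod_pvt;
  xen_wmb ();                   /* requests visible before new index */
  Ring->sring->req_prod = New;  /* STORE: publish the new requests */
  xen_mb ();                    /* store must not pass the load below */
  Notify = ((RING_IDX)(New - Ring->sring->req_event) <
            (RING_IDX)(New - Old));  /* LOAD: backend's req_event */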

To fix this, we need to tell the CPU to not reorder stores after loads.

The AArch64 implementation of MemoryFence() uses "dmb sy", which seems
to prevent any reordering.

[1] https://en.wikipedia.org/wiki/Memory_ordering#Runtime_memory_ordering

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

I'm not sure what the correct implementation would be with MSFT
toolchains: _ReadWriteBarrier() seems to be only a compiler barrier,
and I don't know whether _mm_mfence() emits just "mfence" or whether
it acts as a compiler barrier as well.

Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Julien Grall <julien@xen.org>
---
 OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.inf          |  8 ++++++
 OvmfPkg/XenPvBlkDxe/FullMemoryFence.h        | 27 ++++++++++++++++++++
 OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.h            |  3 ++-
 OvmfPkg/XenPvBlkDxe/X86GccFullMemoryFence.c  | 20 +++++++++++++++
 OvmfPkg/XenPvBlkDxe/X86MsftFullMemoryFence.c | 22 ++++++++++++++++
 5 files changed, 79 insertions(+), 1 deletion(-)
 create mode 100644 OvmfPkg/XenPvBlkDxe/FullMemoryFence.h
 create mode 100644 OvmfPkg/XenPvBlkDxe/X86GccFullMemoryFence.c
 create mode 100644 OvmfPkg/XenPvBlkDxe/X86MsftFullMemoryFence.c

diff --git a/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.inf b/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.inf
index 5dd8e8be1183..dc91865265c1 100644
--- a/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.inf
+++ b/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.inf
@@ -30,9 +30,17 @@ [Sources]
   ComponentName.c
   ComponentName.h
   DriverBinding.h
+  FullMemoryFence.h
   XenPvBlkDxe.c
   XenPvBlkDxe.h
 
+[Sources.IA32]
+  X86GccFullMemoryFence.c | GCC
+  X86MsftFullMemoryFence.c | MSFT
+
+[Sources.X64]
+  X86GccFullMemoryFence.c | GCC
+  X86MsftFullMemoryFence.c | MSFT
 
 [LibraryClasses]
   UefiDriverEntryPoint
diff --git a/OvmfPkg/XenPvBlkDxe/FullMemoryFence.h b/OvmfPkg/XenPvBlkDxe/FullMemoryFence.h
new file mode 100644
index 000000000000..e3d1df3d0e9d
--- /dev/null
+++ b/OvmfPkg/XenPvBlkDxe/FullMemoryFence.h
@@ -0,0 +1,27 @@
+/** @file
+  Copyright (C) 2022, Citrix Ltd.
+
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+**/
+
+#if defined (MDE_CPU_IA32) || defined (MDE_CPU_X64)
+
+//
+// Like MemoryFence(), but prevents stores from being reordered with
+// loads by the CPU on X64.
+//
+VOID
+EFIAPI
+FullMemoryFence (
+  VOID
+  );
+
+#else
+
+//
+// Only implement FullMemoryFence() on X86, as MemoryFence() is probably
+// fine on other platforms.
+//
+#define FullMemoryFence()  MemoryFence()
+
+#endif
diff --git a/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.h b/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.h
index 350b7bd309c0..67ee1899e9a8 100644
--- a/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.h
+++ b/OvmfPkg/XenPvBlkDxe/XenPvBlkDxe.h
@@ -11,8 +11,9 @@
 #define __EFI_XEN_PV_BLK_DXE_H__
 
 #include <Uefi.h>
+#include "FullMemoryFence.h"
 
-#define xen_mb()   MemoryFence()
+#define xen_mb()   FullMemoryFence()
 #define xen_rmb()  MemoryFence()
 #define xen_wmb()  MemoryFence()
 
diff --git a/OvmfPkg/XenPvBlkDxe/X86GccFullMemoryFence.c b/OvmfPkg/XenPvBlkDxe/X86GccFullMemoryFence.c
new file mode 100644
index 000000000000..92d107def470
--- /dev/null
+++ b/OvmfPkg/XenPvBlkDxe/X86GccFullMemoryFence.c
@@ -0,0 +1,20 @@
+/** @file
+  Copyright (C) 2022, Citrix Ltd.
+
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+**/
+
+#include "FullMemoryFence.h"
+
+//
+// Like MemoryFence(), but prevents stores from being reordered with
+// loads by the CPU on X64.
+//
+VOID
+EFIAPI
+FullMemoryFence (
+  VOID
+  )
+{
+  __asm__ __volatile__ ("mfence":::"memory");
+}
diff --git a/OvmfPkg/XenPvBlkDxe/X86MsftFullMemoryFence.c b/OvmfPkg/XenPvBlkDxe/X86MsftFullMemoryFence.c
new file mode 100644
index 000000000000..fcb08f7601cd
--- /dev/null
+++ b/OvmfPkg/XenPvBlkDxe/X86MsftFullMemoryFence.c
@@ -0,0 +1,22 @@
+/** @file
+  Copyright (C) 2022, Citrix Ltd.
+
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+**/
+
+#include "FullMemoryFence.h"
+#include <intrin.h>
+
+//
+// Like MemoryFence(), but prevents stores from being reordered with
+// loads by the CPU on X64.
+//
+VOID
+EFIAPI
+FullMemoryFence (
+  VOID
+  )
+{
+  _ReadWriteBarrier ();
+  _mm_mfence ();
+}
-- 
Anthony PERARD


