public inbox for devel@edk2.groups.io
From: "Ni, Ruiyu" <ruiyu.ni@Intel.com>
To: Jian J Wang <jian.j.wang@intel.com>, edk2-devel@lists.01.org
Cc: Jiewen Yao <jiewen.yao@intel.com>,
	Eric Dong <eric.dong@intel.com>, Star Zeng <star.zeng@intel.com>
Subject: Re: [PATCH v3 1/2] MdeModulePkg/DxeIpl: Mark page table as read-only
Date: Wed, 6 Dec 2017 18:04:35 +0800	[thread overview]
Message-ID: <e3e4c57d-3b7c-6a56-72e0-fb07a5c0a0f5@Intel.com> (raw)
In-Reply-To: <20171205081604.11644-2-jian.j.wang@intel.com>

On 12/5/2017 4:16 PM, Jian J Wang wrote:
>> v3:
>>     Remove the public definition of PAGE_TABLE_POOL_HEADER but keep similar
>>     concept locally. CpuDxe has its own page table pool.
> 
>> v2:
>>     Introduce page table pool to ease the page table memory allocation and
>>     protection, which replaces the direct calling of AllocatePages().
> 
> This patch marks the memory pages used for the page table as read-only
> after paging is set up. CR0.WP must be set for the protection to take
> effect.
> 
> A simple page table memory management mechanism, the page table pool,
> is introduced to simplify page table memory allocation and protection.
> It also helps to reduce potential recursive "split" operations during
> updates of memory paging attributes.
> 
> The basic idea is to allocate a batch of contiguous pages in advance,
> as one or more page table pools, and to satisfy all future page table
> allocations from those pools instead of from arbitrary system memory.
> If a pool is reserved at a 2MB boundary and is one 2MB page in size,
> no page-granularity "split" operation is needed, because the memory
> for any new page table will usually lie in the same 2MB page as the
> target page table being updated.
> 
> And since the page tables are centralized (a few 2MB pages), it's
> easier to protect them: their attributes can be changed to read-only
> once and for all. No protection step is needed for new page tables as
> long as the pool still has free pages available.
> 
> Once the current page table pool is used up, another 2MB pool can be
> allocated, and the whole new 2MB block is then set read-only in one
> step instead of marking the new page tables page by page.
> 
> Two new PCDs, PcdPageTablePoolUnitSize and PcdPageTablePoolAlignment,
> specify the size and alignment of the page table pool. For IA32
> processors, 0x200000 (2MB) is the only value for both that meets the
> page table pool requirements.
> 
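[Editor's note] As an illustration of the pool scheme described above, here is a simplified, hosted-C sketch of such a bump allocator. This is not the actual EDK2 code: AllocateAlignedPages() is emulated with C11 aligned_alloc(), the names merely mirror the patch for readability, and for brevity a pool is always exactly one 2MB unit (the real code instead rounds the request up to a unit multiple).

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE        4096u
#define POOL_UNIT_PAGES  512u                       /* 512 * 4KB == 2MB */
#define POOL_UNIT_SIZE   (POOL_UNIT_PAGES * PAGE_SIZE)

typedef struct Pool {
  struct Pool *NextPool;   /* circular list linking every pool         */
  size_t       Offset;     /* next free byte inside this pool          */
  size_t       FreePages;  /* pages still available in this pool       */
} Pool;

static Pool *mPool = NULL;

/* Reserve one 2MB-aligned, 2MB-sized unit and link it into the list. */
static int InitPool (void)
{
  Pool *p = aligned_alloc (POOL_UNIT_SIZE, POOL_UNIT_SIZE);
  if (p == NULL) {
    return 0;
  }
  if (mPool == NULL) {
    p->NextPool = p;
  } else {
    p->NextPool     = mPool->NextPool;
    mPool->NextPool = p;
  }
  mPool            = p;
  mPool->Offset    = PAGE_SIZE;             /* first page is the header */
  mPool->FreePages = POOL_UNIT_PAGES - 1;
  return 1;
}

/* Bump-allocate page-table pages from the current pool. */
void *AllocPages (size_t Pages)
{
  if (Pages == 0) {
    return NULL;
  }
  if (mPool == NULL || Pages > mPool->FreePages) {
    if (!InitPool ()) {
      return NULL;
    }
  }
  void *Buffer = (uint8_t *)mPool + mPool->Offset;
  mPool->Offset    += Pages * PAGE_SIZE;
  mPool->FreePages -= Pages;
  return Buffer;
}
```

Because every returned buffer lies inside a 2MB-aligned unit, marking that one unit read-only protects all page tables carved from it at once.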
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
>   MdeModulePkg/Core/DxeIplPeim/DxeIpl.h            |  34 +++
>   MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c  |   8 +-
>   MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c | 301 ++++++++++++++++++++++-
>   MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h |  26 ++
>   4 files changed, 365 insertions(+), 4 deletions(-)
> 
> diff --git a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.h b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.h
> index f3aabdb7e0..9dc80b1508 100644
> --- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.h
> +++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.h
> @@ -265,4 +265,38 @@ IsNullDetectionEnabled (
>     VOID
>     );
>   
> +/**
> +  Prevent the memory pages used for page table from being overwritten.
> +
> +  @param[in] PageTableBase    Base address of page table (CR3).
> +
> +**/
> +VOID
> +EnablePageTableProtection (
> +  IN  UINTN     PageTableBase,
> +  IN  BOOLEAN   Level4Paging
> +  );
> +
> +/**
> +  This API provides a way to allocate memory for page table.
> +
> +  This API can be called more than once to allocate memory for page tables.
> +
> +  Allocates the number of 4KB pages and returns a pointer to the allocated
> +  buffer. The buffer returned is aligned on a 4KB boundary.
> +
> +  If Pages is 0, then NULL is returned.
> +  If there is not enough memory remaining to satisfy the request, then NULL is
> +  returned.
> +
> +  @param  Pages                 The number of 4 KB pages to allocate.
> +
> +  @return A pointer to the allocated buffer or NULL if allocation fails.
> +
> +**/
> +VOID *
> +AllocatePageTableMemory (
> +  IN UINTN           Pages
> +  );
> +
>   #endif
> diff --git a/MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c b/MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c
> index 5649265367..13fff28e93 100644
> --- a/MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c
> +++ b/MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c
> @@ -99,7 +99,7 @@ Create4GPageTablesIa32Pae (
>     NumberOfPdpEntriesNeeded = (UINT32) LShiftU64 (1, (PhysicalAddressBits - 30));
>   
>     TotalPagesNum = NumberOfPdpEntriesNeeded + 1;
> -  PageAddress = (UINTN) AllocatePages (TotalPagesNum);
> +  PageAddress = (UINTN) AllocatePageTableMemory (TotalPagesNum);
>     ASSERT (PageAddress != 0);
>   
>     PageMap = (VOID *) PageAddress;
> @@ -149,6 +149,12 @@ Create4GPageTablesIa32Pae (
>         );
>     }
>   
> +  //
> +  // Protect the page table by marking the memory used for page table to be
> +  // read-only.
> +  //
> +  EnablePageTableProtection ((UINTN)PageMap, FALSE);
> +
>     return (UINTN) PageMap;
>   }
>   
> diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
> index 29b6205e88..4ef3521224 100644
> --- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
> +++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
> @@ -31,6 +31,11 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
>   #include "DxeIpl.h"
>   #include "VirtualMemory.h"
>   
> +//
> +// Global variable to track the memory currently available for page table use.
> +//
> +PAGE_TABLE_POOL   *mPageTablePool = NULL;
> +
>   /**
>     Clear legacy memory located at the first 4K-page, if available.
>   
> @@ -117,6 +122,112 @@ EnableExecuteDisableBit (
>     AsmWriteMsr64 (0xC0000080, MsrRegisters);
>   }
>   
> +/**
> +  Initialize a buffer pool for page table use only.
> +
> +  To reduce the potential split operation on page table, the pages reserved for
> +  page table should be allocated in the times of 512 (= SIZE_2MB) and at the
> +  boundary of SIZE_2MB. So the page pool is always initialized with number of
> +  pages greater than or equal to the given PoolPages.
> +
> +  Once the pages in the pool are used up, this method should be called again to
> +  reserve at least another 512 pages. But usually this won't happen in practice.
> +
> +  @param PoolPages  The least page number of the pool to be created.
> +
> +  @retval TRUE    The pool is initialized successfully.
> +  @retval FALSE   The memory is out of resource.
> +**/
> +BOOLEAN
> +InitializePageTablePool (
> +  IN UINTN           PoolPages
> +  )
> +{
> +  VOID          *Buffer;
> +
> +  //
> +  // Always reserve at least PAGE_TABLE_POOL_UNIT_PAGES (512).
> +  //
> +  if (PoolPages <= PAGE_TABLE_POOL_UNIT_PAGES) {
> +    PoolPages = PAGE_TABLE_POOL_UNIT_PAGES;
> +  } else {
> +    PoolPages = ((PoolPages + PAGE_TABLE_POOL_UNIT_PAGES - 1) /
> +                 PAGE_TABLE_POOL_UNIT_PAGES) * PAGE_TABLE_POOL_UNIT_PAGES;

What does the above calculation mean?
If PoolPages equals 513, new PoolPages = ((513 + 511) / 512) * 512
= 1024.

So the logic is rounding PoolPages up to a multiple of 512?
If it's an internal function, just asserting that PoolPages is a
multiple of 512 should be good enough.
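For reference, a plain-C restatement of the round-up in the patch (illustrative names only) shows it yields 1024, not 512, for an input of 513:

```c
#include <assert.h>
#include <stddef.h>

#define UNIT_PAGES 512u   /* stands in for PAGE_TABLE_POOL_UNIT_PAGES */

/* Round PoolPages up to the next multiple of 512 pages. */
static size_t RoundUpToUnit (size_t PoolPages)
{
  return ((PoolPages + UNIT_PAGES - 1) / UNIT_PAGES) * UNIT_PAGES;
}
```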

> +  }
> +
> +  Buffer = AllocateAlignedPages (PoolPages, PAGE_TABLE_POOL_ALIGNMENT);

PAGE_TABLE_POOL_UNIT_PAGES and PAGE_TABLE_POOL_ALIGNMENT are actually
related to each other. Maybe just use
EFI_SIZE_TO_PAGES (SIZE_2MB) in place of PAGE_TABLE_POOL_UNIT_PAGES and
SIZE_2MB in place of PAGE_TABLE_POOL_ALIGNMENT?
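The relationship in question is plain page-size arithmetic. A minimal check, with EFI_SIZE_TO_PAGES restated in simplified hosted C (the MdePkg macro is written with shifts, but evaluates the same way for these values):

```c
#include <assert.h>

#define EFI_PAGE_SIZE        4096u
#define EFI_SIZE_TO_PAGES(s) (((s) + EFI_PAGE_SIZE - 1u) / EFI_PAGE_SIZE)
#define SIZE_2MB             0x200000u
```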

> +  if (Buffer == NULL) {
> +    DEBUG ((DEBUG_ERROR, "ERROR: Out of aligned pages\r\n"));
> +    return FALSE;
> +  }
> +
> +  //
> +  // Link all pools into a list for easier tracking later.
> +  //
> +  if (mPageTablePool == NULL) {
> +    mPageTablePool = Buffer;
> +    mPageTablePool->NextPool = mPageTablePool;
> +  } else {
> +    ((PAGE_TABLE_POOL *)Buffer)->NextPool = mPageTablePool->NextPool;
> +    mPageTablePool->NextPool = Buffer;
> +    mPageTablePool = Buffer;
> +  }
> +
> +  //
> +  // Reserve one page for pool header.
> +  //
> +  mPageTablePool->FreePages  = PoolPages - 1;
> +  mPageTablePool->Offset = EFI_PAGES_TO_SIZE (1);
> +
> +  return TRUE;
> +}
> +
> +/**
> +  This API provides a way to allocate memory for page table.
> +
> +  This API can be called more than once to allocate memory for page tables.
> +
> +  Allocates the number of 4KB pages and returns a pointer to the allocated
> +  buffer. The buffer returned is aligned on a 4KB boundary.
> +
> +  If Pages is 0, then NULL is returned.
> +  If there is not enough memory remaining to satisfy the request, then NULL is
> +  returned.
> +
> +  @param  Pages                 The number of 4 KB pages to allocate.
> +
> +  @return A pointer to the allocated buffer or NULL if allocation fails.
> +
> +**/
> +VOID *
> +AllocatePageTableMemory (
> +  IN UINTN           Pages
> +  )
> +{
> +  VOID          *Buffer;
> +
> +  if (Pages == 0) {
> +    return NULL;
> +  }
> +
> +  //
> +  // Renew the pool if necessary.
> +  //
> +  if (mPageTablePool == NULL ||
> +      Pages > mPageTablePool->FreePages) {

Great to use FreePages to track the remaining free pages.
It's possible that when FreePages equals 3 and Pages equals 4,
the 3 remaining pages are wasted.
But I think that's fine. Not a big waste of memory.
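The worst case alluded to above can be pinned down with a tiny helper (illustrative only): whenever a request exceeds the pages left in the current pool, everything still free in that pool is stranded.

```c
#include <assert.h>

/* Pages stranded when a request forces a fresh pool to be started. */
static unsigned WastedPages (unsigned FreePages, unsigned Pages)
{
  return (Pages > FreePages) ? FreePages : 0;
}
```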

> +    if (!InitializePageTablePool (Pages)) {
> +      return NULL;
> +    }
> +  }
> +
> +  Buffer = (UINT8 *)mPageTablePool + mPageTablePool->Offset;
> +
> +  mPageTablePool->Offset     += EFI_PAGES_TO_SIZE (Pages);
> +  mPageTablePool->FreePages  -= Pages;
> +
> +  return Buffer;
> +}
> +
>   /**
>     Split 2M page to 4K.
>   
> @@ -144,7 +255,7 @@ Split2MPageTo4K (
>     //
>     AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
>   
> -  PageTableEntry = AllocatePages (1);
> +  PageTableEntry = AllocatePageTableMemory (1);
>     ASSERT (PageTableEntry != NULL);
>   
>     //
> @@ -204,7 +315,7 @@ Split1GPageTo2M (
>     //
>     AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
>   
> -  PageDirectoryEntry = AllocatePages (1);
> +  PageDirectoryEntry = AllocatePageTableMemory (1);
>     ASSERT (PageDirectoryEntry != NULL);
>   
>     //
> @@ -234,6 +345,184 @@ Split1GPageTo2M (
>     }
>   }
>   
> +/**
> +  Set one page of page table pool memory to be read-only.
> +
> +  @param[in] PageTableBase    Base address of page table (CR3).
> +  @param[in] Address          Start address of a page to be set as read-only.
> +  @param[in] Level4Paging     Level 4 paging flag.
> +
> +**/
> +VOID
> +SetPageTablePoolReadOnly (
> +  IN  UINTN                             PageTableBase,
> +  IN  EFI_PHYSICAL_ADDRESS              Address,
> +  IN  BOOLEAN                           Level4Paging
> +  )
> +{
> +  UINTN                 Index;
> +  UINTN                 EntryIndex;
> +  UINT64                AddressEncMask;
> +  EFI_PHYSICAL_ADDRESS  PhysicalAddress;
> +  UINT64                *PageTable;
> +  UINT64                *NewPageTable;
> +  UINT64                PageAttr;
> +  UINT64                LevelSize[5];
> +  UINT64                LevelMask[5];
> +  UINTN                 LevelShift[5];
> +  UINTN                 Level;
> +  UINT64                PoolUnitSize;
> +
> +  ASSERT (PageTableBase != 0);
> +
> +  //
> +  // Since the page table is always from page table pool, which is always
> +  // located at the boundary of PcdPageTablePoolAlignment, we just need to
> +  // set the whole pool unit to be read-only.
> +  //
> +  Address = Address & PAGE_TABLE_POOL_ALIGN_MASK;
> +
> +  LevelShift[1] = PAGING_L1_ADDRESS_SHIFT;
> +  LevelShift[2] = PAGING_L2_ADDRESS_SHIFT;
> +  LevelShift[3] = PAGING_L3_ADDRESS_SHIFT;
> +  LevelShift[4] = PAGING_L4_ADDRESS_SHIFT;
> +
> +  LevelMask[1] = PAGING_4K_ADDRESS_MASK_64;
> +  LevelMask[2] = PAGING_2M_ADDRESS_MASK_64;
> +  LevelMask[3] = PAGING_1G_ADDRESS_MASK_64;
> +  LevelMask[4] = PAGING_1G_ADDRESS_MASK_64;
> +
> +  LevelSize[1] = SIZE_4KB;
> +  LevelSize[2] = SIZE_2MB;
> +  LevelSize[3] = SIZE_1GB;
> +  LevelSize[4] = SIZE_512GB;
> +
> +  AddressEncMask  = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) &
> +                    PAGING_1G_ADDRESS_MASK_64;
> +  PageTable       = (UINT64 *)(UINTN)PageTableBase;
> +  PoolUnitSize    = PAGE_TABLE_POOL_UNIT_SIZE;
> +
> +  for (Level = (Level4Paging) ? 4 : 3; Level > 0; --Level) {
> +    Index = ((UINTN)RShiftU64 (Address, LevelShift[Level]));
> +    Index &= PAGING_PAE_INDEX_MASK;
> +
> +    PageAttr = PageTable[Index];
> +    if ((PageAttr & IA32_PG_PS) == 0) {
> +      //
> +      // Go to next level of table.
> +      //
> +      PageTable = (UINT64 *)(UINTN)(PageAttr & ~AddressEncMask &
> +                                    PAGING_4K_ADDRESS_MASK_64);
> +      continue;
> +    }
> +
> +    if (PoolUnitSize >= LevelSize[Level]) {
> +      //
> +      // Clear R/W bit if current page granularity is not larger than pool unit
> +      // size.
> +      //
> +      if ((PageAttr & IA32_PG_RW) != 0) {
> +        while (PoolUnitSize > 0) {
> +          //
> +          // PAGE_TABLE_POOL_UNIT_SIZE and PAGE_TABLE_POOL_ALIGNMENT are fit in
> +          // one page (2MB). Then we don't need to update attributes for pages
> +          // crossing page directory. ASSERT below is for that purpose.
> +          //
> +          ASSERT (Index < EFI_PAGE_SIZE/sizeof (UINT64));
> +
> +          PageTable[Index] &= ~(UINT64)IA32_PG_RW;
> +          PoolUnitSize    -= LevelSize[Level];
> +
> +          ++Index;
> +        }
> +      }
> +
> +      break;
> +
> +    } else {
> +      //
> +      // A smaller page granularity is needed.
> +      //
> +      NewPageTable = AllocatePageTableMemory (1);
> +      ASSERT (NewPageTable != NULL);
> +
> +      PhysicalAddress = PageAttr & LevelMask[Level];
> +      for (EntryIndex = 0;
> +            EntryIndex < EFI_PAGE_SIZE/sizeof (UINT64);
> +            ++EntryIndex) {
> +        NewPageTable[EntryIndex] = PhysicalAddress  | AddressEncMask |
> +                                   IA32_PG_P | IA32_PG_RW;
> +        if (Level > 1) {
> +          NewPageTable[EntryIndex] |= IA32_PG_PS;
> +        }
> +        PhysicalAddress += LevelSize[Level];
> +      }
> +
> +      PageTable[Index] = (UINT64)(UINTN)NewPageTable | AddressEncMask |
> +                                        IA32_PG_P | IA32_PG_RW;
> +      PageTable = NewPageTable;
> +    }
> +  }
> +}
> +
> +/**
> +  Prevent the memory pages used for page table from being overwritten.
> +
> +  @param[in] PageTableBase    Base address of page table (CR3).
> +  @param[in] Level4Paging     Level 4 paging flag.
> +
> +**/
> +VOID
> +EnablePageTableProtection (
> +  IN  UINTN     PageTableBase,
> +  IN  BOOLEAN   Level4Paging
> +  )
> +{
> +  PAGE_TABLE_POOL         *HeadPool;
> +  PAGE_TABLE_POOL         *Pool;
> +  UINT64                  PoolSize;
> +  EFI_PHYSICAL_ADDRESS    Address;
> +
> +  if (mPageTablePool == NULL) {
> +    return;
> +  }
> +
> +  //
> +  // Disable write protection, because we need to mark page table to be write
> +  // protected.
> +  //
> +  AsmWriteCr0 (AsmReadCr0() & ~CR0_WP);
> +
> +  //
> +  // SetPageTablePoolReadOnly might update mPageTablePool. It's safer to
> +  // remember original one in advance.
> +  //
> +  HeadPool = mPageTablePool;
> +  Pool = HeadPool;
> +  do {
> +    Address  = (EFI_PHYSICAL_ADDRESS)(UINTN)Pool;
> +    PoolSize = Pool->Offset + EFI_PAGES_TO_SIZE (Pool->FreePages);
> +
> +    //
> +    // The size of one pool must be multiple of PAGE_TABLE_POOL_UNIT_SIZE, which
> +    // is one of page size of the processor (2MB by default). Let's apply the
> +    // protection to them one by one.
> +    //
> +    while (PoolSize > 0) {
> +      SetPageTablePoolReadOnly(PageTableBase, Address, Level4Paging);
> +      Address   += PAGE_TABLE_POOL_UNIT_SIZE;
> +      PoolSize  -= PAGE_TABLE_POOL_UNIT_SIZE;
> +    }
> +
> +    Pool = Pool->NextPool;
> +  } while (Pool != HeadPool);
> +
> +  //
> +  // Enable write protection, after page table attribute updated.
> +  //
> +  AsmWriteCr0 (AsmReadCr0() | CR0_WP);
> +}
> +
>   /**
>     Allocates and fills in the Page Directory and Page Table Entries to
>     establish a 1:1 Virtual to Physical mapping.
> @@ -329,7 +618,7 @@ CreateIdentityMappingPageTables (
>     } else {
>       TotalPagesNum = NumberOfPml4EntriesNeeded + 1;
>     }
> -  BigPageAddress = (UINTN) AllocatePages (TotalPagesNum);
> +  BigPageAddress = (UINTN) AllocatePageTableMemory (TotalPagesNum);
>     ASSERT (BigPageAddress != 0);
>   
>     //
> @@ -430,6 +719,12 @@ CreateIdentityMappingPageTables (
>         );
>     }
>   
> +  //
> +  // Protect the page table by marking the memory used for page table to be
> +  // read-only.
> +  //
> +  EnablePageTableProtection ((UINTN)PageMap, TRUE);
> +
>     if (PcdGetBool (PcdSetNxForStack)) {
>       EnableExecuteDisableBit ();
>     }
> diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h
> index 7c9bb49e3e..9f14ac6007 100644
> --- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h
> +++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h
> @@ -148,11 +148,37 @@ typedef union {
>   
>   #pragma pack()
>   
> +#define CR0_WP                      BIT16
> +
>   #define IA32_PG_P                   BIT0
>   #define IA32_PG_RW                  BIT1
> +#define IA32_PG_PS                  BIT7
> +
> +#define PAGING_PAE_INDEX_MASK       0x1FF
>   
> +#define PAGING_4K_ADDRESS_MASK_64   0x000FFFFFFFFFF000ull
> +#define PAGING_2M_ADDRESS_MASK_64   0x000FFFFFFFE00000ull
>   #define PAGING_1G_ADDRESS_MASK_64   0x000FFFFFC0000000ull
>   
> +#define PAGING_L1_ADDRESS_SHIFT     12
> +#define PAGING_L2_ADDRESS_SHIFT     21
> +#define PAGING_L3_ADDRESS_SHIFT     30
> +#define PAGING_L4_ADDRESS_SHIFT     39
> +
> +#define PAGING_PML4E_NUMBER         4
> +
> +#define PAGE_TABLE_POOL_ALIGNMENT   BASE_2MB
> +#define PAGE_TABLE_POOL_UNIT_SIZE   SIZE_2MB
> +#define PAGE_TABLE_POOL_UNIT_PAGES  EFI_SIZE_TO_PAGES (SIZE_2MB)
> +#define PAGE_TABLE_POOL_ALIGN_MASK  \
> +  (~(EFI_PHYSICAL_ADDRESS)(PAGE_TABLE_POOL_ALIGNMENT - 1))
> +
> +typedef struct {
> +  VOID            *NextPool;
> +  UINTN           Offset;
> +  UINTN           FreePages;
> +} PAGE_TABLE_POOL;
> +
>   /**
>     Enable Execute Disable Bit.
>   
> 


-- 
Thanks,
Ray



Thread overview: 8+ messages
2017-12-05  8:16 [PATCH v3 0/2] Enable page table write protection Jian J Wang
2017-12-05  8:16 ` [PATCH v3 1/2] MdeModulePkg/DxeIpl: Mark page table as read-only Jian J Wang
2017-12-06 10:04   ` Ni, Ruiyu [this message]
2017-12-06 13:05     ` Wang, Jian J
2017-12-05  8:16 ` [PATCH v3 2/2] UefiCpuPkg/CpuDxe: Enable protection for newly added page table Jian J Wang
2017-12-05 20:05 ` [PATCH v3 0/2] Enable page table write protection Laszlo Ersek
2017-12-06  0:15   ` Wang, Jian J
2017-12-06  7:11 ` Yao, Jiewen
