From: "Wu, Hao A" <hao.a.wu@intel.com>
To: "Ni, Ruiyu" <ruiyu.ni@intel.com>,
"edk2-devel@lists.01.org" <edk2-devel@lists.01.org>
Cc: "Zeng, Star" <star.zeng@intel.com>,
"Dong, Eric" <eric.dong@intel.com>,
"Yao, Jiewen" <jiewen.yao@intel.com>
Subject: Re: [PATCH 2/4] MdeModulePkg/NvmExpressPei: Add the NVME device PEI BlockIo support
Date: Thu, 21 Jun 2018 08:31:25 +0000 [thread overview]
Message-ID: <B80AF82E9BFB8E4FBD8C89DA810C6A0931E0A2F3@SHSMSX104.ccr.corp.intel.com> (raw)
In-Reply-To: <734D49CCEBEEF84792F5B80ED585239D5BD40C10@SHSMSX104.ccr.corp.intel.com>
> -----Original Message-----
> From: Ni, Ruiyu
> Sent: Thursday, June 21, 2018 3:46 PM
> To: Wu, Hao A; edk2-devel@lists.01.org
> Cc: Zeng, Star; Dong, Eric; Yao, Jiewen
> Subject: RE: [PATCH 2/4] MdeModulePkg/NvmExpressPei: Add the NVME
> device PEI BlockIo support
>
> 2 minor comments.
Ray,
Thanks for the feedback.
I will propose a V2 version of the series according to your comments.
Best Regards,
Hao Wu
>
> Thanks/Ray
>
> > -----Original Message-----
> > From: Wu, Hao A
> > Sent: Friday, June 15, 2018 3:04 PM
> > To: edk2-devel@lists.01.org
> > Cc: Wu, Hao A <hao.a.wu@intel.com>; Zeng, Star <star.zeng@intel.com>;
> > Dong, Eric <eric.dong@intel.com>; Ni, Ruiyu <ruiyu.ni@intel.com>; Yao,
> > Jiewen <jiewen.yao@intel.com>
> > Subject: [PATCH 2/4] MdeModulePkg/NvmExpressPei: Add the NVME
> > device PEI BlockIo support
> >
> > REF: https://bugzilla.tianocore.org/show_bug.cgi?id=256
> >
> > This commit adds the PEI BlockIo support for NVM Express devices.
> >
> > The driver consumes the EDKII_NVM_EXPRESS_HOST_CONTROLLER_PPI to locate the
> > NVM Express host controllers within the system, and then produces the
> > BlockIo(2) PPIs for each controller.
> >
> > The implementation of this driver is currently based on the NVM Express 1.1
> > Specification, which is available at:
> > http://nvmexpress.org/resources/specifications/
> >
> > Cc: Star Zeng <star.zeng@intel.com>
> > Cc: Eric Dong <eric.dong@intel.com>
> > Cc: Ruiyu Ni <ruiyu.ni@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Hao Wu <hao.a.wu@intel.com>
> > ---
> >  MdeModulePkg/Bus/Pci/NvmExpressPei/DmaMem.c                 | 249 +++
> >  MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.c          | 368 ++++
> >  MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.h          | 265 +++
> >  MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.inf        |  70 ++
> >  MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.uni        |  21 +
> >  MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiBlockIo.c   | 531 ++++++
> >  MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiBlockIo.h   | 266 +++
> >  MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiExtra.uni   |  19 +
> >  MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiHci.c       | 748 ++++++++
> >  MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiHci.h       | 166 +++
> >  MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiPassThru.c  | 628 ++++
> >  MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiPassThru.h  | 107 +++
> >  MdeModulePkg/MdeModulePkg.dsc                               |   1 +
> >  13 files changed, 3439 insertions(+)
> >
> > diff --git a/MdeModulePkg/Bus/Pci/NvmExpressPei/DmaMem.c
> > b/MdeModulePkg/Bus/Pci/NvmExpressPei/DmaMem.c
> > new file mode 100644
> > index 0000000000..51b48d38dd
> > --- /dev/null
> > +++ b/MdeModulePkg/Bus/Pci/NvmExpressPei/DmaMem.c
> > @@ -0,0 +1,249 @@
> > +/** @file
> > +  The DMA memory helper functions.
> > +
> > + Copyright (c) 2018, Intel Corporation. All rights reserved.<BR>
> > +
> > + This program and the accompanying materials
> > + are licensed and made available under the terms and conditions
> > + of the BSD License which accompanies this distribution. The
> > + full text of the license may be found at
> > + http://opensource.org/licenses/bsd-license.php
> > +
> > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> > BASIS,
> > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> > EXPRESS OR IMPLIED.
> > +
> > +**/
> > +
> > +#include "NvmExpressPei.h"
> > +
> > +EDKII_IOMMU_PPI *mIoMmu;
> > +
> > +/**
> > + Provides the controller-specific addresses required to access system
> > memory from a
> > + DMA bus master.
> > +
> > +  @param Operation             Indicates if the bus master is going to read or write to system memory.
> > +  @param HostAddress           The system memory address to map to the PCI controller.
> > +  @param NumberOfBytes         On input the number of bytes to map. On output the number of bytes that were mapped.
> > +  @param DeviceAddress         The resulting map address for the bus master PCI controller to use to access the hosts HostAddress.
> > +  @param Mapping               A resulting value to pass to Unmap().
> > +
> > + @retval EFI_SUCCESS The range was mapped for the returned
> > NumberOfBytes.
> > + @retval EFI_UNSUPPORTED The HostAddress cannot be mapped as a
> > common buffer.
> > + @retval EFI_INVALID_PARAMETER One or more parameters are invalid.
> > + @retval EFI_OUT_OF_RESOURCES The request could not be completed
> > due to a lack of resources.
> > + @retval EFI_DEVICE_ERROR The system hardware could not map the
> > requested address.
> > +
> > +**/
> > +EFI_STATUS
> > +IoMmuMap (
> > + IN EDKII_IOMMU_OPERATION Operation,
> > + IN VOID *HostAddress,
> > + IN OUT UINTN *NumberOfBytes,
> > + OUT EFI_PHYSICAL_ADDRESS *DeviceAddress,
> > + OUT VOID **Mapping
> > + )
> > +{
> > + EFI_STATUS Status;
> > + UINT64 Attribute;
> > +
> > + if (mIoMmu != NULL) {
> > + Status = mIoMmu->Map (
> > + mIoMmu,
> > + Operation,
> > + HostAddress,
> > + NumberOfBytes,
> > + DeviceAddress,
> > + Mapping
> > + );
> > + if (EFI_ERROR (Status)) {
> > + return EFI_OUT_OF_RESOURCES;
> > + }
> > + switch (Operation) {
> > + case EdkiiIoMmuOperationBusMasterRead:
> > + case EdkiiIoMmuOperationBusMasterRead64:
> > + Attribute = EDKII_IOMMU_ACCESS_READ;
> > + break;
> > + case EdkiiIoMmuOperationBusMasterWrite:
> > + case EdkiiIoMmuOperationBusMasterWrite64:
> > + Attribute = EDKII_IOMMU_ACCESS_WRITE;
> > + break;
> > + case EdkiiIoMmuOperationBusMasterCommonBuffer:
> > + case EdkiiIoMmuOperationBusMasterCommonBuffer64:
> > +      Attribute = EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE;
> > + break;
> > + default:
> > + ASSERT(FALSE);
> > + return EFI_INVALID_PARAMETER;
> > + }
> > + Status = mIoMmu->SetAttribute (
> > + mIoMmu,
> > + *Mapping,
> > + Attribute
> > + );
> > + if (EFI_ERROR (Status)) {
> > + return Status;
> > + }
> > + } else {
> > + *DeviceAddress = (EFI_PHYSICAL_ADDRESS)(UINTN)HostAddress;
> > + *Mapping = NULL;
> > + Status = EFI_SUCCESS;
> > + }
> > + return Status;
> > +}
> > +
> > +/**
> > + Completes the Map() operation and releases any corresponding resources.
> > +
> > + @param Mapping The mapping value returned from Map().
> > +
> > + @retval EFI_SUCCESS The range was unmapped.
> > + @retval EFI_INVALID_PARAMETER Mapping is not a value that was
> > returned by Map().
> > + @retval EFI_DEVICE_ERROR The data was not committed to the target
> > system memory.
> > +**/
> > +EFI_STATUS
> > +IoMmuUnmap (
> > + IN VOID *Mapping
> > + )
> > +{
> > + EFI_STATUS Status;
> > +
> > + if (mIoMmu != NULL) {
> > + Status = mIoMmu->SetAttribute (mIoMmu, Mapping, 0);
> > + Status = mIoMmu->Unmap (mIoMmu, Mapping);
> > + } else {
> > + Status = EFI_SUCCESS;
> > + }
> > + return Status;
> > +}
> > +
> > +/**
> > + Allocates pages that are suitable for an
> > OperationBusMasterCommonBuffer or
> > + OperationBusMasterCommonBuffer64 mapping.
> > +
> > + @param Pages The number of pages to allocate.
> > +  @param HostAddress           A pointer to store the base system memory address of the allocated range.
> > +  @param DeviceAddress         The resulting map address for the bus master PCI controller to use to access the hosts HostAddress.
> > +  @param Mapping               A resulting value to pass to Unmap().
> > +
> > + @retval EFI_SUCCESS The requested memory pages were allocated.
> > + @retval EFI_UNSUPPORTED Attributes is unsupported. The only legal
> > attribute bits are
> > + MEMORY_WRITE_COMBINE and MEMORY_CACHED.
> > + @retval EFI_INVALID_PARAMETER One or more parameters are invalid.
> > + @retval EFI_OUT_OF_RESOURCES The memory pages could not be
> > allocated.
> > +
> > +**/
> > +EFI_STATUS
> > +IoMmuAllocateBuffer (
> > + IN UINTN Pages,
> > + OUT VOID **HostAddress,
> > + OUT EFI_PHYSICAL_ADDRESS *DeviceAddress,
> > + OUT VOID **Mapping
> > + )
> > +{
> > + EFI_STATUS Status;
> > + UINTN NumberOfBytes;
> > + EFI_PHYSICAL_ADDRESS HostPhyAddress;
> > +
> > + *HostAddress = NULL;
> > + *DeviceAddress = 0;
> > +
> > + if (mIoMmu != NULL) {
> > + Status = mIoMmu->AllocateBuffer (
> > + mIoMmu,
> > + EfiBootServicesData,
> > + Pages,
> > + HostAddress,
> > + 0
> > + );
> > + if (EFI_ERROR (Status)) {
> > + return EFI_OUT_OF_RESOURCES;
> > + }
> > +
> > + NumberOfBytes = EFI_PAGES_TO_SIZE(Pages);
> > + Status = mIoMmu->Map (
> > + mIoMmu,
> > + EdkiiIoMmuOperationBusMasterCommonBuffer,
> > + *HostAddress,
> > + &NumberOfBytes,
> > + DeviceAddress,
> > + Mapping
> > + );
> > + if (EFI_ERROR (Status)) {
> > + return EFI_OUT_OF_RESOURCES;
> > + }
> > + Status = mIoMmu->SetAttribute (
> > + mIoMmu,
> > + *Mapping,
> > +                       EDKII_IOMMU_ACCESS_READ | EDKII_IOMMU_ACCESS_WRITE
> > + );
> > + if (EFI_ERROR (Status)) {
> > + return Status;
> > + }
> > + } else {
> > + Status = PeiServicesAllocatePages (
> > + EfiBootServicesData,
> > + Pages,
> > + &HostPhyAddress
> > + );
> > + if (EFI_ERROR (Status)) {
> > + return EFI_OUT_OF_RESOURCES;
> > + }
> > + *HostAddress = (VOID *)(UINTN)HostPhyAddress;
> > + *DeviceAddress = HostPhyAddress;
> > + *Mapping = NULL;
> > + }
> > + return Status;
> > +}
> > +
> > +/**
> > + Frees memory that was allocated with AllocateBuffer().
> > +
> > + @param Pages The number of pages to free.
> > + @param HostAddress The base system memory address of the
> > allocated range.
> > + @param Mapping The mapping value returned from Map().
> > +
> > + @retval EFI_SUCCESS The requested memory pages were freed.
> > + @retval EFI_INVALID_PARAMETER The memory range specified by
> > HostAddress and Pages
> > + was not allocated with AllocateBuffer().
> > +
> > +**/
> > +EFI_STATUS
> > +IoMmuFreeBuffer (
> > + IN UINTN Pages,
> > + IN VOID *HostAddress,
> > + IN VOID *Mapping
> > + )
> > +{
> > + EFI_STATUS Status;
> > +
> > + if (mIoMmu != NULL) {
> > + Status = mIoMmu->SetAttribute (mIoMmu, Mapping, 0);
> > + Status = mIoMmu->Unmap (mIoMmu, Mapping);
> > + Status = mIoMmu->FreeBuffer (mIoMmu, Pages, HostAddress);
> > + } else {
> > + Status = EFI_SUCCESS;
> > + }
> > + return Status;
> > +}
> > +
> > +/**
> > + Initialize IOMMU.
> > +**/
> > +VOID
> > +IoMmuInit (
> > + VOID
> > + )
> > +{
> > + PeiServicesLocatePpi (
> > + &gEdkiiIoMmuPpiGuid,
> > + 0,
> > + NULL,
> > + (VOID **)&mIoMmu
> > + );
> > +}
> > +
> > diff --git a/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.c
> > b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.c
> > new file mode 100644
> > index 0000000000..0ba88385c9
> > --- /dev/null
> > +++ b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.c
> > @@ -0,0 +1,368 @@
> > +/** @file
> > + The NvmExpressPei driver is used to manage non-volatile memory
> > subsystem
> > + which follows NVM Express specification at PEI phase.
> > +
> > + Copyright (c) 2018, Intel Corporation. All rights reserved.<BR>
> > +
> > + This program and the accompanying materials
> > + are licensed and made available under the terms and conditions
> > + of the BSD License which accompanies this distribution. The
> > + full text of the license may be found at
> > + http://opensource.org/licenses/bsd-license.php
> > +
> > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> > BASIS,
> > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> > EXPRESS OR IMPLIED.
> > +
> > +**/
> > +
> > +#include "NvmExpressPei.h"
> > +
> > +EFI_PEI_PPI_DESCRIPTOR mNvmeBlkIoPpiListTemplate = {
> > + EFI_PEI_PPI_DESCRIPTOR_PPI,
> > + &gEfiPeiVirtualBlockIoPpiGuid,
> > + NULL
> > +};
> > +
> > +EFI_PEI_PPI_DESCRIPTOR mNvmeBlkIo2PpiListTemplate = {
> > +  EFI_PEI_PPI_DESCRIPTOR_PPI | EFI_PEI_PPI_DESCRIPTOR_TERMINATE_LIST,
> > + &gEfiPeiVirtualBlockIo2PpiGuid,
> > + NULL
> > +};
> > +
> > +EFI_PEI_NOTIFY_DESCRIPTOR mNvmeEndOfPeiNotifyListTemplate = {
> > +  (EFI_PEI_PPI_DESCRIPTOR_NOTIFY_CALLBACK | EFI_PEI_PPI_DESCRIPTOR_TERMINATE_LIST),
> > + &gEfiEndOfPeiSignalPpiGuid,
> > + NvmePeimEndOfPei
> > +};
> > +
> > +/**
> > + Check if the specified Nvm Express device namespace is active, and then
> > get the Identify
> > + Namespace data.
> > +
> > + @param[in,out] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > + @param[in] NamespaceId The specified namespace identifier.
> > +
> > + @retval EFI_SUCCESS The specified namespace in the device is
> > successfully enumerated.
> > + @return Others Error occurs when enumerating the namespace.
> > +
> > +**/
> > +EFI_STATUS
> > +EnumerateNvmeDevNamespace (
> > + IN OUT PEI_NVME_CONTROLLER_PRIVATE_DATA *Private,
> > + IN UINT32 NamespaceId
> > + )
> > +{
> > + EFI_STATUS Status;
> > + NVME_ADMIN_NAMESPACE_DATA *NamespaceData;
> > + PEI_NVME_NAMESPACE_INFO *NamespaceInfo;
> > + UINT32 DeviceIndex;
> > + UINT32 Lbads;
> > + UINT32 Flbas;
> > + UINT32 LbaFmtIdx;
> > +
> > + NamespaceData = (NVME_ADMIN_NAMESPACE_DATA *)
> > AllocateZeroPool (sizeof (NVME_ADMIN_NAMESPACE_DATA));
> > + if (NamespaceData == NULL) {
> > + return EFI_OUT_OF_RESOURCES;
> > + }
> > +
> > + //
> > + // Identify Namespace
> > + //
> > + Status = NvmeIdentifyNamespace (
> > + Private,
> > + NamespaceId,
> > + NamespaceData
> > + );
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((DEBUG_ERROR, "%a: NvmeIdentifyNamespace fail, Status -
> > %r\n", __FUNCTION__, Status));
> > + goto Exit;
> > + }
> > +
> > + //
> > + // Validate Namespace
> > + //
> > + if (NamespaceData->Ncap == 0) {
> > + DEBUG ((DEBUG_INFO, "%a: Namespace ID %d is an inactive one.\n",
> > __FUNCTION__, NamespaceId));
> > + Status = EFI_DEVICE_ERROR;
> > + goto Exit;
> > + }
> > +
> > + DeviceIndex = Private->ActiveNamespaceNum;
> > + NamespaceInfo = &Private->NamespaceInfo[DeviceIndex];
> > + NamespaceInfo->NamespaceId = NamespaceId;
> > + NamespaceInfo->NamespaceUuid = NamespaceData->Eui64;
> > + NamespaceInfo->Controller = Private;
> > + Private->ActiveNamespaceNum++;
> > +
> > + //
> > + // Build BlockIo media structure
> > + //
> > + Flbas = NamespaceData->Flbas;
> > + LbaFmtIdx = Flbas & 0xF;
> > + Lbads = NamespaceData->LbaFormat[LbaFmtIdx].Lbads;
> > +
> > + NamespaceInfo->Media.InterfaceType = MSG_NVME_NAMESPACE_DP;
> > + NamespaceInfo->Media.RemovableMedia = FALSE;
> > + NamespaceInfo->Media.MediaPresent = TRUE;
> > + NamespaceInfo->Media.ReadOnly = FALSE;
> > + NamespaceInfo->Media.BlockSize = (UINT32) 1 << Lbads;
> > +  NamespaceInfo->Media.LastBlock      = (EFI_PEI_LBA) NamespaceData->Nsze - 1;
> > + DEBUG ((
> > + DEBUG_INFO,
> > + "%a: Namespace ID %d - BlockSize = 0x%x, LastBlock = 0x%lx\n",
> > + __FUNCTION__,
> > + NamespaceId,
> > + NamespaceInfo->Media.BlockSize,
> > + NamespaceInfo->Media.LastBlock
> > + ));
> > +
> > +Exit:
> > + if (NamespaceData != NULL) {
> > + FreePool (NamespaceData);
> > + }
> > +
> > + return Status;
> > +}
> > +
> > +/**
> > + Discover all Nvm Express device active namespaces.
> > +
> > + @param[in,out] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > +
> > + @retval EFI_SUCCESS All the namespaces in the device are successfully
> > enumerated.
> > + @return EFI_NOT_FOUND No active namespaces can be found.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeDiscoverNamespaces (
> > + IN OUT PEI_NVME_CONTROLLER_PRIVATE_DATA *Private
> > + )
> > +{
> > + UINT32 NamespaceId;
> > +
> > + Private->ActiveNamespaceNum = 0;
> > +  Private->NamespaceInfo = AllocateZeroPool (Private->ControllerData->Nn * sizeof (PEI_NVME_NAMESPACE_INFO));
> > +
> > + //
> > + // According to Nvm Express 1.1 spec Figure 82, the field 'Nn' of the
> > identify
> > + // controller data defines the number of valid namespaces present for the
> > + // controller. Namespaces shall be allocated in order (starting with 1) and
> > + // packed sequentially.
> > + //
> > + for (NamespaceId = 1; NamespaceId <= Private->ControllerData->Nn;
> > NamespaceId++) {
> > +    //
> > +    // The return status is deliberately ignored here: if a valid namespace is
> > +    // inactive, an error status is returned, but the enumeration of the
> > +    // remaining valid namespaces continues.
> > +    //
> > + EnumerateNvmeDevNamespace (Private, NamespaceId);
> > + }
> > + if (Private->ActiveNamespaceNum == 0) {
> > + return EFI_NOT_FOUND;
> > + }
> > +
> > + return EFI_SUCCESS;
> > +}
> > +
> > +/**
> > + One notified function to cleanup the allocated resources at the end of PEI.
> > +
> > + @param[in] PeiServices Pointer to PEI Services Table.
> > + @param[in] NotifyDescriptor Pointer to the descriptor for the
> > Notification
> > + event that caused this function to execute.
> > + @param[in] Ppi Pointer to the PPI data associated with this
> > function.
> > +
> > + @retval EFI_SUCCESS The function completes successfully
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +NvmePeimEndOfPei (
> > + IN EFI_PEI_SERVICES **PeiServices,
> > + IN EFI_PEI_NOTIFY_DESCRIPTOR *NotifyDescriptor,
> > + IN VOID *Ppi
> > + )
> > +{
> > + PEI_NVME_CONTROLLER_PRIVATE_DATA *Private;
> > +
> > + Private = GET_NVME_PEIM_HC_PRIVATE_DATA_FROM_THIS_NOTIFY
> > (NotifyDescriptor);
> > + NvmeDisableController (Private);
> > + NvmeFreeControllerResource (Private);
> > +
> > + return EFI_SUCCESS;
> > +}
> > +
> > +/**
> > + Entry point of the PEIM.
> > +
> > + @param[in] FileHandle Handle of the file being invoked.
> > + @param[in] PeiServices Describes the list of possible PEI Services.
> > +
> > + @retval EFI_SUCCESS PPI successfully installed.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +NvmExpressPeimEntry (
> > + IN EFI_PEI_FILE_HANDLE FileHandle,
> > + IN CONST EFI_PEI_SERVICES **PeiServices
> > + )
> > +{
> > + EFI_STATUS Status;
> > + EFI_BOOT_MODE BootMode;
> > + EDKII_NVM_EXPRESS_HOST_CONTROLLER_PPI *NvmeHcPpi;
> > + UINT8 Controller;
> > + UINTN MmioBase;
> > + PEI_NVME_CONTROLLER_PRIVATE_DATA *Private;
> > + EFI_PHYSICAL_ADDRESS DeviceAddress;
> > +
> > + //
> > + // Shadow this PEIM to run from memory
> > + //
> > + if (!EFI_ERROR (PeiServicesRegisterForShadow (FileHandle))) {
> > + return EFI_SUCCESS;
> > + }
> > +
> > + Status = PeiServicesGetBootMode (&BootMode);
> > + //
> > + // Currently, the driver does not produce any PPI in S3 boot path
> > + //
> > + if (BootMode == BOOT_ON_S3_RESUME) {
> > + return EFI_SUCCESS;
> > + }
> > +
> > + //
> > + // Locate the NVME host controller PPI
> > + //
> > + Status = PeiServicesLocatePpi (
> > + &gEdkiiPeiNvmExpressHostControllerPpiGuid,
> > + 0,
> > + NULL,
> > + (VOID **) &NvmeHcPpi
> > + );
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((DEBUG_ERROR, "%a: Fail to locate NvmeHostControllerPpi.\n",
> > __FUNCTION__));
> > + return EFI_UNSUPPORTED;
> > + }
> > +
> > + IoMmuInit ();
> > +
> > + Controller = 0;
> > + MmioBase = 0;
> > + while (TRUE) {
> > + Status = NvmeHcPpi->GetNvmeHcMmioBar (
> > + (EFI_PEI_SERVICES **) PeiServices,
> > + NvmeHcPpi,
> > + Controller,
> > + &MmioBase
> > + );
> > + //
> > +    // An error status means that no more controllers are present
> > + //
> > + if (EFI_ERROR (Status)) {
> > + break;
> > + }
> > +
> > + //
> > + // Memory allocation for controller private data
> > + //
> > + Private = AllocateZeroPool (sizeof
> > (PEI_NVME_CONTROLLER_PRIVATE_DATA));
> > + if (Private == NULL) {
> > + DEBUG ((
> > + DEBUG_ERROR,
> > + "%a: Fail to allocate private data for Controller %d.\n",
> > + __FUNCTION__,
> > + Controller
> > + ));
> > + return EFI_OUT_OF_RESOURCES;
> > + }
> > +
> > + //
> > + // Memory allocation for transfer-related data
> > + //
> > + Status = IoMmuAllocateBuffer (
> > + NVME_MEM_MAX_PAGES,
> > + &Private->Buffer,
> > + &DeviceAddress,
> > + &Private->BufferMapping
> > + );
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((
> > + DEBUG_ERROR,
> > + "%a: Fail to allocate DMA buffers for Controller %d.\n",
> > + __FUNCTION__,
> > + Controller
> > + ));
> > + NvmeFreeControllerResource (Private);
> > + return Status;
> > + }
> > +    ASSERT (DeviceAddress == ((EFI_PHYSICAL_ADDRESS) (UINTN) Private->Buffer));
> > +    DEBUG ((DEBUG_INFO, "%a: DMA buffer base at 0x%x\n", __FUNCTION__, Private->Buffer));
> > +
> > + //
> > + // Initialize controller private data
> > + //
> > + Private->Signature =
> > NVME_PEI_CONTROLLER_PRIVATE_DATA_SIGNATURE;
> > + Private->MmioBase = MmioBase;
> > + Private->BlkIoPpi.GetNumberOfBlockDevices =
> > NvmeBlockIoPeimGetDeviceNo;
> > + Private->BlkIoPpi.GetBlockDeviceMediaInfo =
> > NvmeBlockIoPeimGetMediaInfo;
> > + Private->BlkIoPpi.ReadBlocks = NvmeBlockIoPeimReadBlocks;
> > + Private->BlkIo2Ppi.Revision =
> > EFI_PEI_RECOVERY_BLOCK_IO2_PPI_REVISION;
> > + Private->BlkIo2Ppi.GetNumberOfBlockDevices =
> > NvmeBlockIoPeimGetDeviceNo2;
> > + Private->BlkIo2Ppi.GetBlockDeviceMediaInfo =
> > NvmeBlockIoPeimGetMediaInfo2;
> > + Private->BlkIo2Ppi.ReadBlocks = NvmeBlockIoPeimReadBlocks2;
> > + Private->BlkIoPpiList = mNvmeBlkIoPpiListTemplate;
> > + Private->BlkIo2PpiList = mNvmeBlkIo2PpiListTemplate;
> > + Private->EndOfPeiNotifyList = mNvmeEndOfPeiNotifyListTemplate;
>
> 1. CopyMem() should be used for structure assignment.
>
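Good point. For reference, a minimal sketch of how these three structure
assignments could be reworked with CopyMem() in V2 (same templates and fields
as in this patch; the final V2 code may differ slightly):

  //
  // Sketch only - copy the descriptor templates instead of assigning structures
  //
  CopyMem (&Private->BlkIoPpiList, &mNvmeBlkIoPpiListTemplate, sizeof (EFI_PEI_PPI_DESCRIPTOR));
  CopyMem (&Private->BlkIo2PpiList, &mNvmeBlkIo2PpiListTemplate, sizeof (EFI_PEI_PPI_DESCRIPTOR));
  CopyMem (&Private->EndOfPeiNotifyList, &mNvmeEndOfPeiNotifyListTemplate, sizeof (EFI_PEI_NOTIFY_DESCRIPTOR));
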
> > + Private->BlkIoPpiList.Ppi = &Private->BlkIoPpi;
> > + Private->BlkIo2PpiList.Ppi = &Private->BlkIo2Ppi;
> > +
> > + //
> > + // Initialize the NVME controller
> > + //
> > + Status = NvmeControllerInit (Private);
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((
> > + DEBUG_ERROR,
> > + "%a: Controller initialization fail for Controller %d with Status - %r.\n",
> > + __FUNCTION__,
> > + Controller,
> > + Status
> > + ));
> > + NvmeFreeControllerResource (Private);
> > + Controller++;
> > + continue;
> > + }
> > +
> > + //
> > + // Enumerate the NVME namespaces on the controller
> > + //
> > + Status = NvmeDiscoverNamespaces (Private);
> > + if (EFI_ERROR (Status)) {
> > + //
> > + // No active namespace was found on the controller
> > + //
> > + DEBUG ((
> > + DEBUG_ERROR,
> > + "%a: Namespaces discovery fail for Controller %d with Status - %r.\n",
> > + __FUNCTION__,
> > + Controller,
> > + Status
> > + ));
> > + NvmeFreeControllerResource (Private);
> > + Controller++;
> > + continue;
> > + }
> > +
> > + PeiServicesInstallPpi (&Private->BlkIoPpiList);
> > + PeiServicesNotifyPpi (&Private->EndOfPeiNotifyList);
> > + DEBUG ((
> > + DEBUG_INFO,
> > + "%a: BlockIO PPI has been installed on Controller %d.\n",
> > + __FUNCTION__,
> > + Controller
> > + ));
> > + Controller++;
> > + }
> > +
> > + return EFI_SUCCESS;
> > +}
> > diff --git a/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.h
> > b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.h
> > new file mode 100644
> > index 0000000000..5e6f66892f
> > --- /dev/null
> > +++ b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.h
> > @@ -0,0 +1,265 @@
> > +/** @file
> > + The NvmExpressPei driver is used to manage non-volatile memory
> > subsystem
> > + which follows NVM Express specification at PEI phase.
> > +
> > + Copyright (c) 2018, Intel Corporation. All rights reserved.<BR>
> > +
> > + This program and the accompanying materials
> > + are licensed and made available under the terms and conditions
> > + of the BSD License which accompanies this distribution. The
> > + full text of the license may be found at
> > + http://opensource.org/licenses/bsd-license.php
> > +
> > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> > BASIS,
> > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> > EXPRESS OR IMPLIED.
> > +
> > +**/
> > +
> > +#ifndef _NVM_EXPRESS_PEI_H_
> > +#define _NVM_EXPRESS_PEI_H_
> > +
> > +#include <PiPei.h>
> > +
> > +#include <IndustryStandard/Nvme.h>
> > +
> > +#include <Ppi/NvmExpressHostController.h>
> > +#include <Ppi/BlockIo.h>
> > +#include <Ppi/BlockIo2.h>
> > +#include <Ppi/IoMmu.h>
> > +#include <Ppi/EndOfPeiPhase.h>
> > +
> > +#include <Library/DebugLib.h>
> > +#include <Library/PeiServicesLib.h>
> > +#include <Library/MemoryAllocationLib.h>
> > +#include <Library/BaseMemoryLib.h>
> > +#include <Library/IoLib.h>
> > +#include <Library/PciLib.h>
> > +#include <Library/TimerLib.h>
> > +
> > +//
> > +// Structure forward declarations
> > +//
> > +typedef struct _PEI_NVME_NAMESPACE_INFO
> > PEI_NVME_NAMESPACE_INFO;
> > +typedef struct _PEI_NVME_CONTROLLER_PRIVATE_DATA
> > PEI_NVME_CONTROLLER_PRIVATE_DATA;
> > +
> > +#include "NvmExpressPeiHci.h"
> > +#include "NvmExpressPeiPassThru.h"
> > +#include "NvmExpressPeiBlockIo.h"
> > +
> > +//
> > +// NVME PEI driver implementation related definitions
> > +//
> > +#define NVME_MAX_QUEUES                 2       // Number of I/O queues supported by the driver, 1 for AQ, 1 for CQ
> > +#define NVME_ASQ_SIZE                   1       // Number of admin submission queue entries, which is 0-based
> > +#define NVME_ACQ_SIZE                   1       // Number of admin completion queue entries, which is 0-based
> > +#define NVME_CSQ_SIZE                   63      // Number of I/O submission queue entries, which is 0-based
> > +#define NVME_CCQ_SIZE                   63      // Number of I/O completion queue entries, which is 0-based
> > +#define NVME_PRP_SIZE (8) // Pages of PRP list
> > +
> > +#define NVME_MEM_MAX_PAGES \
> > + ( \
> > + 1 /* ASQ */ + \
> > + 1 /* ACQ */ + \
> > + 1 /* SQs */ + \
> > + 1 /* CQs */ + \
> > + NVME_PRP_SIZE) /* PRPs */
> > +
> > +#define NVME_ADMIN_QUEUE 0x00
> > +#define NVME_IO_QUEUE 0x01
> > +#define NVME_GENERIC_TIMEOUT            5000000 // Generic PassThru command timeout value, in us unit
> > +#define NVME_POLL_INTERVAL              100     // Poll interval for PassThru command, in us unit
> > +
> > +//
> > +// Nvme namespace data structure.
> > +//
> > +struct _PEI_NVME_NAMESPACE_INFO {
> > + UINT32 NamespaceId;
> > + UINT64 NamespaceUuid;
> > + EFI_PEI_BLOCK_IO2_MEDIA Media;
> > +
> > + PEI_NVME_CONTROLLER_PRIVATE_DATA *Controller;
> > +};
> > +
> > +//
> > +// Unique signature for private data structure.
> > +//
> > +#define NVME_PEI_CONTROLLER_PRIVATE_DATA_SIGNATURE  SIGNATURE_32 ('N','V','P','C')
> > +
> > +//
> > +// Nvme controller private data structure.
> > +//
> > +struct _PEI_NVME_CONTROLLER_PRIVATE_DATA {
> > + UINT32 Signature;
> > + UINTN MmioBase;
> > + EFI_PEI_RECOVERY_BLOCK_IO_PPI BlkIoPpi;
> > + EFI_PEI_RECOVERY_BLOCK_IO2_PPI BlkIo2Ppi;
> > + EFI_PEI_PPI_DESCRIPTOR BlkIoPpiList;
> > + EFI_PEI_PPI_DESCRIPTOR BlkIo2PpiList;
> > + EFI_PEI_NOTIFY_DESCRIPTOR EndOfPeiNotifyList;
> > +
> > + //
> > + // Pointer to identify controller data
> > + //
> > + NVME_ADMIN_CONTROLLER_DATA *ControllerData;
> > +
> > + //
> > + // (4 + NVME_PRP_SIZE) x 4kB aligned buffers will be carved out of this
> > buffer
> > + // 1st 4kB boundary is the start of the admin submission queue
> > + // 2nd 4kB boundary is the start of the admin completion queue
> > + // 3rd 4kB boundary is the start of I/O submission queue
> > + // 4th 4kB boundary is the start of I/O completion queue
> > + // 5th 4kB boundary is the start of PRP list buffers
> > + //
> > + VOID *Buffer;
> > + VOID *BufferMapping;
> > +
> > + //
> > + // Pointers to 4kB aligned submission & completion queues
> > + //
> > + NVME_SQ *SqBuffer[NVME_MAX_QUEUES];
> > + NVME_CQ *CqBuffer[NVME_MAX_QUEUES];
> > +
> > + //
> > + // Submission and completion queue indices
> > + //
> > + NVME_SQTDBL SqTdbl[NVME_MAX_QUEUES];
> > + NVME_CQHDBL CqHdbl[NVME_MAX_QUEUES];
> > +
> > + UINT8 Pt[NVME_MAX_QUEUES];
> > + UINT16 Cid[NVME_MAX_QUEUES];
> > +
> > + //
> > + // Nvme controller capabilities
> > + //
> > + NVME_CAP Cap;
> > +
> > + //
> > + // Namespaces information on the controller
> > + //
> > + UINT32 ActiveNamespaceNum;
> > + PEI_NVME_NAMESPACE_INFO *NamespaceInfo;
> > +};
> > +
> > +#define GET_NVME_PEIM_HC_PRIVATE_DATA_FROM_THIS_BLKIO(a) \
> > +  CR (a, PEI_NVME_CONTROLLER_PRIVATE_DATA, BlkIoPpi, NVME_PEI_CONTROLLER_PRIVATE_DATA_SIGNATURE)
> > +#define GET_NVME_PEIM_HC_PRIVATE_DATA_FROM_THIS_BLKIO2(a) \
> > +  CR (a, PEI_NVME_CONTROLLER_PRIVATE_DATA, BlkIo2Ppi, NVME_PEI_CONTROLLER_PRIVATE_DATA_SIGNATURE)
> > +#define GET_NVME_PEIM_HC_PRIVATE_DATA_FROM_THIS_NOTIFY(a) \
> > +  CR (a, PEI_NVME_CONTROLLER_PRIVATE_DATA, EndOfPeiNotifyList, NVME_PEI_CONTROLLER_PRIVATE_DATA_SIGNATURE)
> > +
> > +
> > +/**
> > + Initialize IOMMU.
> > +**/
> > +VOID
> > +IoMmuInit (
> > + VOID
> > + );
> > +
> > +/**
> > + Allocates pages that are suitable for an
> > OperationBusMasterCommonBuffer or
> > + OperationBusMasterCommonBuffer64 mapping.
> > +
> > + @param Pages The number of pages to allocate.
> > +  @param HostAddress           A pointer to store the base system memory address of the allocated range.
> > +  @param DeviceAddress         The resulting map address for the bus master PCI controller to use to access the hosts HostAddress.
> > +  @param Mapping               A resulting value to pass to Unmap().
> > +
> > + @retval EFI_SUCCESS The requested memory pages were allocated.
> > + @retval EFI_UNSUPPORTED Attributes is unsupported. The only legal
> > attribute bits are
> > + MEMORY_WRITE_COMBINE and MEMORY_CACHED.
> > + @retval EFI_INVALID_PARAMETER One or more parameters are invalid.
> > + @retval EFI_OUT_OF_RESOURCES The memory pages could not be
> > allocated.
> > +
> > +**/
> > +EFI_STATUS
> > +IoMmuAllocateBuffer (
> > + IN UINTN Pages,
> > + OUT VOID **HostAddress,
> > + OUT EFI_PHYSICAL_ADDRESS *DeviceAddress,
> > + OUT VOID **Mapping
> > + );
> > +
> > +/**
> > + Frees memory that was allocated with AllocateBuffer().
> > +
> > + @param Pages The number of pages to free.
> > + @param HostAddress The base system memory address of the
> > allocated range.
> > + @param Mapping The mapping value returned from Map().
> > +
> > + @retval EFI_SUCCESS The requested memory pages were freed.
> > + @retval EFI_INVALID_PARAMETER The memory range specified by
> > HostAddress and Pages
> > + was not allocated with AllocateBuffer().
> > +
> > +**/
> > +EFI_STATUS
> > +IoMmuFreeBuffer (
> > + IN UINTN Pages,
> > + IN VOID *HostAddress,
> > + IN VOID *Mapping
> > + );
> > +
> > +/**
> > + Provides the controller-specific addresses required to access system
> > memory from a
> > + DMA bus master.
> > +
> > +  @param Operation             Indicates if the bus master is going to read or write to system memory.
> > +  @param HostAddress           The system memory address to map to the PCI controller.
> > +  @param NumberOfBytes         On input the number of bytes to map. On output the number of bytes that were mapped.
> > +  @param DeviceAddress         The resulting map address for the bus master PCI controller to use to access the hosts HostAddress.
> > +  @param Mapping               A resulting value to pass to Unmap().
> > +
> > + @retval EFI_SUCCESS The range was mapped for the returned
> > NumberOfBytes.
> > + @retval EFI_UNSUPPORTED The HostAddress cannot be mapped as a
> > common buffer.
> > + @retval EFI_INVALID_PARAMETER One or more parameters are invalid.
> > + @retval EFI_OUT_OF_RESOURCES The request could not be completed
> > due to a lack of resources.
> > + @retval EFI_DEVICE_ERROR The system hardware could not map the
> > requested address.
> > +
> > +**/
> > +EFI_STATUS
> > +IoMmuMap (
> > + IN EDKII_IOMMU_OPERATION Operation,
> > + IN VOID *HostAddress,
> > + IN OUT UINTN *NumberOfBytes,
> > + OUT EFI_PHYSICAL_ADDRESS *DeviceAddress,
> > + OUT VOID **Mapping
> > + );
> > +
> > +/**
> > + Completes the Map() operation and releases any corresponding resources.
> > +
> > + @param Mapping The mapping value returned from Map().
> > +
> > + @retval EFI_SUCCESS The range was unmapped.
> > + @retval EFI_INVALID_PARAMETER Mapping is not a value that was
> > returned by Map().
> > + @retval EFI_DEVICE_ERROR The data was not committed to the target
> > system memory.
> > +**/
> > +EFI_STATUS
> > +IoMmuUnmap (
> > + IN VOID *Mapping
> > + );
> > +
> > +/**
> > + One notified function to cleanup the allocated resources at the end of PEI.
> > +
> > + @param[in] PeiServices Pointer to PEI Services Table.
> > + @param[in] NotifyDescriptor Pointer to the descriptor for the
> > Notification
> > + event that caused this function to execute.
> > + @param[in] Ppi Pointer to the PPI data associated with this
> > function.
> > +
> > + @retval EFI_SUCCESS The function completes successfully
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +NvmePeimEndOfPei (
> > + IN EFI_PEI_SERVICES **PeiServices,
> > + IN EFI_PEI_NOTIFY_DESCRIPTOR *NotifyDescriptor,
> > + IN VOID *Ppi
> > + );
> > +
> > +#endif
> > diff --git a/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.inf
> > b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.inf
> > new file mode 100644
> > index 0000000000..8437c815fa
> > --- /dev/null
> > +++ b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.inf
> > @@ -0,0 +1,70 @@
> > +## @file
> > +# The NvmExpressPei driver is used to manage non-volatile memory
> > subsystem
> > +# which follows NVM Express specification at PEI phase.
> > +#
> > +# Copyright (c) 2018, Intel Corporation. All rights reserved.<BR>
> > +#
> > +# This program and the accompanying materials
> > +# are licensed and made available under the terms and conditions of the
> > BSD License
> > +# which accompanies this distribution. The full text of the license may be
> > found at
> > +# http://opensource.org/licenses/bsd-license.php
> > +#
> > +# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> > BASIS,
> > +# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> > EXPRESS OR IMPLIED.
> > +#
> > +##
> > +
> > +[Defines]
> > + INF_VERSION = 0x00010005
> > + BASE_NAME = NvmExpressPei
> > + MODULE_UNI_FILE = NvmExpressPei.uni
> > + FILE_GUID = 94813714-E10A-4798-9909-8C904F66B4D9
> > + MODULE_TYPE = PEIM
> > + VERSION_STRING = 1.0
> > + ENTRY_POINT = NvmExpressPeimEntry
> > +
> > +#
> > +# The following information is for reference only and not required by the
> > build tools.
> > +#
> > +# VALID_ARCHITECTURES = IA32 X64 IPF EBC
>
> 2. IPF can be removed.
>
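Will remove it in V2. The reference comment would then read (a sketch; exact
spacing may differ):

  #  VALID_ARCHITECTURES           = IA32 X64 EBC
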
> > +#
> > +
> > +[Sources]
> > + DmaMem.c
> > + NvmExpressPei.c
> > + NvmExpressPei.h
> > + NvmExpressPeiBlockIo.c
> > + NvmExpressPeiBlockIo.h
> > + NvmExpressPeiHci.c
> > + NvmExpressPeiHci.h
> > + NvmExpressPeiPassThru.c
> > + NvmExpressPeiPassThru.h
> > +
> > +[Packages]
> > + MdePkg/MdePkg.dec
> > + MdeModulePkg/MdeModulePkg.dec
> > +
> > +[LibraryClasses]
> > + DebugLib
> > + PeiServicesLib
> > + MemoryAllocationLib
> > + BaseMemoryLib
> > + IoLib
> > + PciLib
> > + TimerLib
> > + PeimEntryPoint
> > +
> > +[Ppis]
> > + gEfiPeiVirtualBlockIoPpiGuid ## PRODUCES
> > + gEfiPeiVirtualBlockIo2PpiGuid ## PRODUCES
> > + gEdkiiPeiNvmExpressHostControllerPpiGuid ## CONSUMES
> > + gEdkiiIoMmuPpiGuid ## CONSUMES
> > + gEfiEndOfPeiSignalPpiGuid ## CONSUMES
> > +
> > +[Depex]
> > + gEfiPeiMemoryDiscoveredPpiGuid AND
> > + gEfiPeiMasterBootModePpiGuid AND
> > + gEdkiiPeiNvmExpressHostControllerPpiGuid
> > +
> > +[UserExtensions.TianoCore."ExtraFiles"]
> > + NvmExpressPeiExtra.uni
> > diff --git a/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.uni
> > b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.uni
> > new file mode 100644
> > index 0000000000..1956800faf
> > --- /dev/null
> > +++ b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.uni
> > @@ -0,0 +1,21 @@
> > +// /** @file
> > +// The NvmExpressPei driver is used to manage non-volatile memory
> > subsystem
> > +// which follows NVM Express specification at PEI phase.
> > +//
> > +// Copyright (c) 2018, Intel Corporation. All rights reserved.<BR>
> > +//
> > +// This program and the accompanying materials
> > +// are licensed and made available under the terms and conditions
> > +// of the BSD License which accompanies this distribution. The
> > +// full text of the license may be found at
> > +// http://opensource.org/licenses/bsd-license.php
> > +//
> > +// THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> > BASIS,
> > +// WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> > EXPRESS OR IMPLIED.
> > +//
> > +// **/
> > +
> > +
> > +#string STR_MODULE_ABSTRACT             #language en-US "Manage non-volatile memory subsystem at PEI phase"
> > +
> > +#string STR_MODULE_DESCRIPTION          #language en-US "The NvmExpressPei driver is used to manage non-volatile memory subsystem which follows NVM Express specification at PEI phase."
> > diff --git
> > a/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiBlockIo.c
> > b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiBlockIo.c
> > new file mode 100644
> > index 0000000000..033d263c91
> > --- /dev/null
> > +++ b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiBlockIo.c
> > @@ -0,0 +1,531 @@
> > +/** @file
> > + The NvmExpressPei driver is used to manage non-volatile memory
> > subsystem
> > + which follows NVM Express specification at PEI phase.
> > +
> > + Copyright (c) 2018, Intel Corporation. All rights reserved.<BR>
> > +
> > + This program and the accompanying materials
> > + are licensed and made available under the terms and conditions
> > + of the BSD License which accompanies this distribution. The
> > + full text of the license may be found at
> > + http://opensource.org/licenses/bsd-license.php
> > +
> > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> > BASIS,
> > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> > EXPRESS OR IMPLIED.
> > +
> > +**/
> > +
> > +#include "NvmExpressPei.h"
> > +
> > +/**
> > + Read some sectors from the device.
> > +
> > + @param NamespaceInfo The pointer to the
> > PEI_NVME_NAMESPACE_INFO data structure.
> > + @param Buffer The buffer used to store the data read from the
> > device.
> > + @param Lba The start block number.
> > + @param Blocks Total block number to be read.
> > +
> > + @retval EFI_SUCCESS Data are read from the device.
> > + @retval Others Fail to read all the data.
> > +
> > +**/
> > +EFI_STATUS
> > +ReadSectors (
> > + IN PEI_NVME_NAMESPACE_INFO *NamespaceInfo,
> > + OUT UINTN Buffer,
> > + IN UINT64 Lba,
> > + IN UINT32 Blocks
> > + )
> > +{
> > + EFI_STATUS Status;
> > + UINT32 BlockSize;
> > + PEI_NVME_CONTROLLER_PRIVATE_DATA *Private;
> > + UINT32 Bytes;
> > + EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET
> > CommandPacket;
> > + EDKII_PEI_NVM_EXPRESS_COMMAND Command;
> > + EDKII_PEI_NVM_EXPRESS_COMPLETION Completion;
> > +
> > + Private = NamespaceInfo->Controller;
> > + BlockSize = NamespaceInfo->Media.BlockSize;
> > + Bytes = Blocks * BlockSize;
> > +
> > + ZeroMem (&CommandPacket,
> > sizeof(EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET));
> > + ZeroMem (&Command, sizeof(EDKII_PEI_NVM_EXPRESS_COMMAND));
> > + ZeroMem (&Completion,
> > sizeof(EDKII_PEI_NVM_EXPRESS_COMPLETION));
> > +
> > + CommandPacket.NvmeCmd = &Command;
> > + CommandPacket.NvmeCompletion = &Completion;
> > +
> > + CommandPacket.NvmeCmd->Cdw0.Opcode = NVME_IO_READ_OPC;
> > + CommandPacket.NvmeCmd->Nsid = NamespaceInfo->NamespaceId;
> > + CommandPacket.TransferBuffer = (VOID *)Buffer;
> > +
> > + CommandPacket.TransferLength = Bytes;
> > + CommandPacket.CommandTimeout = NVME_GENERIC_TIMEOUT;
> > + CommandPacket.QueueType = NVME_IO_QUEUE;
> > +
> > + CommandPacket.NvmeCmd->Cdw10 = (UINT32)Lba;
> > + CommandPacket.NvmeCmd->Cdw11 = (UINT32)RShiftU64(Lba, 32);
> > + CommandPacket.NvmeCmd->Cdw12 = (Blocks - 1) & 0xFFFF;
> > +
> > + CommandPacket.NvmeCmd->Flags = CDW10_VALID | CDW11_VALID |
> > CDW12_VALID;
> > +
> > + Status = NvmePassThru (
> > + Private,
> > + NamespaceInfo->NamespaceId,
> > + &CommandPacket
> > + );
> > + return Status;
> > +}
> > +
> > +/**
> > + Read some blocks from the device.
> > +
> > + @param[in] NamespaceInfo The pointer to the
> > PEI_NVME_NAMESPACE_INFO data structure.
> > + @param[out] Buffer The Buffer used to store the Data read from the
> > device.
> > + @param[in] Lba The start block number.
> > + @param[in] Blocks Total block number to be read.
> > +
> > + @retval EFI_SUCCESS Data are read from the device.
> > + @retval Others Fail to read all the data.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeRead (
> > + IN PEI_NVME_NAMESPACE_INFO *NamespaceInfo,
> > + OUT UINTN Buffer,
> > + IN UINT64 Lba,
> > + IN UINTN Blocks
> > + )
> > +{
> > + EFI_STATUS Status;
> > + UINT32 Retries;
> > + UINT32 BlockSize;
> > + PEI_NVME_CONTROLLER_PRIVATE_DATA *Private;
> > + UINT32 MaxTransferBlocks;
> > +  UINTN                               OriginalBlocks;
> > +
> > + Status = EFI_SUCCESS;
> > + Retries = 0;
> > + Private = NamespaceInfo->Controller;
> > + BlockSize = NamespaceInfo->Media.BlockSize;
> > +  OriginalBlocks = Blocks;
> > +
> > + if (Private->ControllerData->Mdts != 0) {
> > +    MaxTransferBlocks = (1 << (Private->ControllerData->Mdts)) * (1 << (Private->Cap.Mpsmin + 12)) / BlockSize;
> > + } else {
> > + MaxTransferBlocks = 1024;
> > + }
> > + //
> > + //
> > + //
> > + DEBUG ((DEBUG_INFO, "%a: MaxTransferBlocks = 0x%x.\n",
> > __FUNCTION__, MaxTransferBlocks));
> > +
> > + while (Blocks > 0) {
> > + Status = ReadSectors (
> > + NamespaceInfo,
> > + Buffer,
> > + Lba,
> > + Blocks > MaxTransferBlocks ? MaxTransferBlocks : (UINT32)Blocks
> > + );
> > + if (EFI_ERROR(Status)) {
> > + Retries++;
> > + MaxTransferBlocks = MaxTransferBlocks >> 1;
> > +
> > + if (Retries > NVME_READ_MAX_RETRY || MaxTransferBlocks < 1) {
> > + DEBUG ((DEBUG_ERROR, "%a: ReadSectors fail, Status - %r\n",
> > __FUNCTION__, Status));
> > + break;
> > + }
> > + DEBUG ((
> > + DEBUG_INFO,
> > + "%a: ReadSectors fail, retry with smaller transfer block number -
> > 0x%x\n",
> > + __FUNCTION__,
> > + MaxTransferBlocks
> > + ));
> > + continue;
> > + }
> > +
> > + if (Blocks > MaxTransferBlocks) {
> > + Blocks -= MaxTransferBlocks;
> > + Buffer += (MaxTransferBlocks * BlockSize);
> > + Lba += MaxTransferBlocks;
> > + } else {
> > + Blocks = 0;
> > + }
> > + }
> > +
> > +  DEBUG ((DEBUG_INFO, "%a: Lba = 0x%08Lx, Original = 0x%08Lx, "
> > +    "Remaining = 0x%08Lx, BlockSize = 0x%x, Status = %r\n", __FUNCTION__,
> > +    Lba, (UINT64)OriginalBlocks, (UINT64)Blocks, BlockSize, Status));
> > + return Status;
> > +}
> > +
> > +/**
> > + Gets the count of block I/O devices that one specific block driver detects.
> > +
> > + This function is used for getting the count of block I/O devices that one
> > + specific block driver detects. If no device is detected, then the function
> > + will return zero.
> > +
> > + @param[in] PeiServices General-purpose services that are available
> > + to every PEIM.
> > + @param[in] This Indicates the
> EFI_PEI_RECOVERY_BLOCK_IO_PPI
> > + instance.
> > + @param[out] NumberBlockDevices The number of block I/O devices
> > discovered.
> > +
> > + @retval EFI_SUCCESS The operation performed successfully.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +NvmeBlockIoPeimGetDeviceNo (
> > + IN EFI_PEI_SERVICES **PeiServices,
> > + IN EFI_PEI_RECOVERY_BLOCK_IO_PPI *This,
> > + OUT UINTN *NumberBlockDevices
> > + )
> > +{
> > + PEI_NVME_CONTROLLER_PRIVATE_DATA *Private;
> > +
> > + if (This == NULL || NumberBlockDevices == NULL) {
> > + return EFI_INVALID_PARAMETER;
> > + }
> > +
> > + Private = GET_NVME_PEIM_HC_PRIVATE_DATA_FROM_THIS_BLKIO
> > (This);
> > + *NumberBlockDevices = Private->ActiveNamespaceNum;
> > +
> > + return EFI_SUCCESS;
> > +}
> > +
> > +/**
> > + Gets a block device's media information.
> > +
> > + This function will provide the caller with the specified block device's media
> > + information. If the media changes, calling this function will update the
> > media
> > + information accordingly.
> > +
> > + @param[in] PeiServices General-purpose services that are available to
> > every
> > + PEIM
> > + @param[in] This Indicates the EFI_PEI_RECOVERY_BLOCK_IO_PPI
> > instance.
> > + @param[in] DeviceIndex Specifies the block device to which the function
> > wants
> > + to talk. Because the driver that implements Block I/O
> > + PPIs will manage multiple block devices, the PPIs that
> > + want to talk to a single device must specify the
> > + device index that was assigned during the enumeration
> > + process. This index is a number from one to
> > + NumberBlockDevices.
> > + @param[out] MediaInfo The media information of the specified block
> > media.
> > + The caller is responsible for the ownership of this
> > + data structure.
> > +
> > + @par Note:
> > + The MediaInfo structure describes an enumeration of possible block
> > device
> > + types. This enumeration exists because no device paths are actually
> > passed
> > + across interfaces that describe the type or class of hardware that is
> > publishing
> > + the block I/O interface. This enumeration will allow for policy decisions
> > + in the Recovery PEIM, such as "Try to recover from legacy floppy first,
> > +    LS-120 second, CD-ROM third." If there are multiple partitions abstracted
> > + by a given device type, they should be reported in ascending order; this
> > + order also applies to nested partitions, such as legacy MBR, where the
> > + outermost partitions would have precedence in the reporting order. The
> > + same logic applies to systems such as IDE that have precedence
> > relationships
> > + like "Master/Slave" or "Primary/Secondary". The master device should
> > be
> > + reported first, the slave second.
> > +
> > + @retval EFI_SUCCESS Media information about the specified block
> > device
> > + was obtained successfully.
> > + @retval EFI_DEVICE_ERROR Cannot get the media information due to a
> > hardware
> > + error.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +NvmeBlockIoPeimGetMediaInfo (
> > + IN EFI_PEI_SERVICES **PeiServices,
> > + IN EFI_PEI_RECOVERY_BLOCK_IO_PPI *This,
> > + IN UINTN DeviceIndex,
> > + OUT EFI_PEI_BLOCK_IO_MEDIA *MediaInfo
> > + )
> > +{
> > + PEI_NVME_CONTROLLER_PRIVATE_DATA *Private;
> > +
> > + if (This == NULL || MediaInfo == NULL) {
> > + return EFI_INVALID_PARAMETER;
> > + }
> > +
> > + Private = GET_NVME_PEIM_HC_PRIVATE_DATA_FROM_THIS_BLKIO
> > (This);
> > +
> > + if ((DeviceIndex == 0) || (DeviceIndex > Private->ActiveNamespaceNum))
> > {
> > + return EFI_INVALID_PARAMETER;
> > + }
> > +
> > +  MediaInfo->DeviceType   = (EFI_PEI_BLOCK_DEVICE_TYPE) EDKII_PEI_BLOCK_DEVICE_TYPE_NVME;
> > +  MediaInfo->MediaPresent = TRUE;
> > +  MediaInfo->LastBlock    = (UINTN)Private->NamespaceInfo[DeviceIndex-1].Media.LastBlock;
> > +  MediaInfo->BlockSize    = Private->NamespaceInfo[DeviceIndex-1].Media.BlockSize;
> > +
> > + return EFI_SUCCESS;
> > +}
> > +
> > +/**
> > + Reads the requested number of blocks from the specified block device.
> > +
> > + The function reads the requested number of blocks from the device. All
> > the
> > + blocks are read, or an error is returned. If there is no media in the device,
> > + the function returns EFI_NO_MEDIA.
> > +
> > + @param[in] PeiServices General-purpose services that are available to
> > + every PEIM.
> > + @param[in] This Indicates the EFI_PEI_RECOVERY_BLOCK_IO_PPI
> > instance.
> > + @param[in] DeviceIndex Specifies the block device to which the function
> > wants
> > + to talk. Because the driver that implements Block I/O
> > + PPIs will manage multiple block devices, PPIs that
> > + want to talk to a single device must specify the device
> > + index that was assigned during the enumeration process.
> > + This index is a number from one to NumberBlockDevices.
> > + @param[in] StartLBA The starting logical block address (LBA) to read
> > from
> > + on the device
> > + @param[in] BufferSize The size of the Buffer in bytes. This number must
> > be
> > + a multiple of the intrinsic block size of the device.
> > + @param[out] Buffer A pointer to the destination buffer for the data.
> > + The caller is responsible for the ownership of the
> > + buffer.
> > +
> > + @retval EFI_SUCCESS The data was read correctly from the device.
> > + @retval EFI_DEVICE_ERROR The device reported an error while
> > attempting
> > + to perform the read operation.
> > + @retval EFI_INVALID_PARAMETER The read request contains LBAs that
> > are not
> > + valid, or the buffer is not properly aligned.
> > + @retval EFI_NO_MEDIA There is no media in the device.
> > + @retval EFI_BAD_BUFFER_SIZE The BufferSize parameter is not a
> > multiple of
> > + the intrinsic block size of the device.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +NvmeBlockIoPeimReadBlocks (
> > + IN EFI_PEI_SERVICES **PeiServices,
> > + IN EFI_PEI_RECOVERY_BLOCK_IO_PPI *This,
> > + IN UINTN DeviceIndex,
> > + IN EFI_PEI_LBA StartLBA,
> > + IN UINTN BufferSize,
> > + OUT VOID *Buffer
> > + )
> > +{
> > + PEI_NVME_CONTROLLER_PRIVATE_DATA *Private;
> > + PEI_NVME_NAMESPACE_INFO *NamespaceInfo;
> > + UINT32 BlockSize;
> > + UINTN NumberOfBlocks;
> > +
> > + Private = GET_NVME_PEIM_HC_PRIVATE_DATA_FROM_THIS_BLKIO
> > (This);
> > +
> > + //
> > + // Check parameters
> > + //
> > + if (This == NULL || Buffer == NULL) {
> > + return EFI_INVALID_PARAMETER;
> > + }
> > +
> > + if (BufferSize == 0) {
> > + return EFI_SUCCESS;
> > + }
> > +
> > + if ((DeviceIndex == 0) || (DeviceIndex > Private->ActiveNamespaceNum))
> > {
> > + return EFI_INVALID_PARAMETER;
> > + }
> > +
> > + //
> > + // Check BufferSize and StartLBA
> > + //
> > + NamespaceInfo = &(Private->NamespaceInfo[DeviceIndex - 1]);
> > + BlockSize = NamespaceInfo->Media.BlockSize;
> > + if (BufferSize % BlockSize != 0) {
> > + return EFI_BAD_BUFFER_SIZE;
> > + }
> > +
> > + if (StartLBA > NamespaceInfo->Media.LastBlock) {
> > + return EFI_INVALID_PARAMETER;
> > + }
> > + NumberOfBlocks = BufferSize / BlockSize;
> > + if (NumberOfBlocks - 1 > NamespaceInfo->Media.LastBlock - StartLBA) {
> > + return EFI_INVALID_PARAMETER;
> > + }
> > +
> > + return NvmeRead (NamespaceInfo, (UINTN)Buffer, StartLBA,
> > NumberOfBlocks);
> > +}
> > +
> > +/**
> > + Gets the count of block I/O devices that one specific block driver detects.
> > +
> > + This function is used for getting the count of block I/O devices that one
> > + specific block driver detects. If no device is detected, then the function
> > + will return zero.
> > +
> > + @param[in] PeiServices General-purpose services that are available
> > + to every PEIM.
> > + @param[in] This Indicates the
> > EFI_PEI_RECOVERY_BLOCK_IO2_PPI
> > + instance.
> > + @param[out] NumberBlockDevices The number of block I/O devices
> > discovered.
> > +
> > + @retval EFI_SUCCESS The operation performed successfully.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +NvmeBlockIoPeimGetDeviceNo2 (
> > + IN EFI_PEI_SERVICES **PeiServices,
> > + IN EFI_PEI_RECOVERY_BLOCK_IO2_PPI *This,
> > + OUT UINTN *NumberBlockDevices
> > + )
> > +{
> > + PEI_NVME_CONTROLLER_PRIVATE_DATA *Private;
> > +
> > + if (This == NULL || NumberBlockDevices == NULL) {
> > + return EFI_INVALID_PARAMETER;
> > + }
> > +
> > + Private = GET_NVME_PEIM_HC_PRIVATE_DATA_FROM_THIS_BLKIO2
> > (This);
> > + *NumberBlockDevices = Private->ActiveNamespaceNum;
> > +
> > + return EFI_SUCCESS;
> > +}
> > +
> > +/**
> > + Gets a block device's media information.
> > +
> > + This function will provide the caller with the specified block device's media
> > + information. If the media changes, calling this function will update the
> > media
> > + information accordingly.
> > +
> > + @param[in] PeiServices General-purpose services that are available to
> > every
> > + PEIM
> > + @param[in] This Indicates the EFI_PEI_RECOVERY_BLOCK_IO2_PPI
> > instance.
> > + @param[in] DeviceIndex Specifies the block device to which the function
> > wants
> > + to talk. Because the driver that implements Block I/O
> > + PPIs will manage multiple block devices, the PPIs that
> > + want to talk to a single device must specify the
> > + device index that was assigned during the enumeration
> > + process. This index is a number from one to
> > + NumberBlockDevices.
> > + @param[out] MediaInfo The media information of the specified block
> > media.
> > + The caller is responsible for the ownership of this
> > + data structure.
> > +
> > + @par Note:
> > + The MediaInfo structure describes an enumeration of possible block
> > device
> > + types. This enumeration exists because no device paths are actually
> > passed
> > + across interfaces that describe the type or class of hardware that is
> > publishing
> > + the block I/O interface. This enumeration will allow for policy decisions
> > + in the Recovery PEIM, such as "Try to recover from legacy floppy first,
> > +    LS-120 second, CD-ROM third." If there are multiple partitions abstracted
> > + by a given device type, they should be reported in ascending order; this
> > + order also applies to nested partitions, such as legacy MBR, where the
> > + outermost partitions would have precedence in the reporting order. The
> > + same logic applies to systems such as IDE that have precedence
> > relationships
> > + like "Master/Slave" or "Primary/Secondary". The master device should
> > be
> > + reported first, the slave second.
> > +
> > + @retval EFI_SUCCESS Media information about the specified block
> > device
> > + was obtained successfully.
> > + @retval EFI_DEVICE_ERROR Cannot get the media information due to a
> > hardware
> > + error.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +NvmeBlockIoPeimGetMediaInfo2 (
> > + IN EFI_PEI_SERVICES **PeiServices,
> > + IN EFI_PEI_RECOVERY_BLOCK_IO2_PPI *This,
> > + IN UINTN DeviceIndex,
> > + OUT EFI_PEI_BLOCK_IO2_MEDIA *MediaInfo
> > + )
> > +{
> > + EFI_STATUS Status;
> > + PEI_NVME_CONTROLLER_PRIVATE_DATA *Private;
> > + EFI_PEI_BLOCK_IO_MEDIA Media;
> > +
> > + if (This == NULL || MediaInfo == NULL) {
> > + return EFI_INVALID_PARAMETER;
> > + }
> > +
> > + Private = GET_NVME_PEIM_HC_PRIVATE_DATA_FROM_THIS_BLKIO2
> > (This);
> > +
> > + Status = NvmeBlockIoPeimGetMediaInfo (
> > + PeiServices,
> > + &Private->BlkIoPpi,
> > + DeviceIndex,
> > + &Media
> > + );
> > + if (EFI_ERROR (Status)) {
> > + return Status;
> > + }
> > +
> > + CopyMem (
> > + MediaInfo,
> > + &(Private->NamespaceInfo[DeviceIndex - 1].Media),
> > + sizeof (EFI_PEI_BLOCK_IO2_MEDIA)
> > + );
> > +
> > + return EFI_SUCCESS;
> > +}
> > +
> > +/**
> > + Reads the requested number of blocks from the specified block device.
> > +
> > + The function reads the requested number of blocks from the device. All
> > the
> > + blocks are read, or an error is returned. If there is no media in the device,
> > + the function returns EFI_NO_MEDIA.
> > +
> > + @param[in] PeiServices General-purpose services that are available to
> > + every PEIM.
> > + @param[in] This Indicates the EFI_PEI_RECOVERY_BLOCK_IO2_PPI
> > instance.
> > + @param[in] DeviceIndex Specifies the block device to which the function
> > wants
> > + to talk. Because the driver that implements Block I/O
> > + PPIs will manage multiple block devices, PPIs that
> > + want to talk to a single device must specify the device
> > + index that was assigned during the enumeration process.
> > + This index is a number from one to NumberBlockDevices.
> > + @param[in] StartLBA The starting logical block address (LBA) to read
> > from
> > + on the device
> > + @param[in] BufferSize The size of the Buffer in bytes. This number must
> > be
> > + a multiple of the intrinsic block size of the device.
> > + @param[out] Buffer A pointer to the destination buffer for the data.
> > + The caller is responsible for the ownership of the
> > + buffer.
> > +
> > + @retval EFI_SUCCESS The data was read correctly from the device.
> > + @retval EFI_DEVICE_ERROR The device reported an error while
> > attempting
> > + to perform the read operation.
> > + @retval EFI_INVALID_PARAMETER The read request contains LBAs that
> > are not
> > + valid, or the buffer is not properly aligned.
> > + @retval EFI_NO_MEDIA There is no media in the device.
> > + @retval EFI_BAD_BUFFER_SIZE The BufferSize parameter is not a
> > multiple of
> > + the intrinsic block size of the device.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +NvmeBlockIoPeimReadBlocks2 (
> > + IN EFI_PEI_SERVICES **PeiServices,
> > + IN EFI_PEI_RECOVERY_BLOCK_IO2_PPI *This,
> > + IN UINTN DeviceIndex,
> > + IN EFI_PEI_LBA StartLBA,
> > + IN UINTN BufferSize,
> > + OUT VOID *Buffer
> > + )
> > +{
> > + PEI_NVME_CONTROLLER_PRIVATE_DATA *Private;
> > +
> > + if (This == NULL) {
> > + return EFI_INVALID_PARAMETER;
> > + }
> > +
> > + Private = GET_NVME_PEIM_HC_PRIVATE_DATA_FROM_THIS_BLKIO2
> > (This);
> > + return NvmeBlockIoPeimReadBlocks (
> > + PeiServices,
> > + &Private->BlkIoPpi,
> > + DeviceIndex,
> > + StartLBA,
> > + BufferSize,
> > + Buffer
> > + );
> > +}
> > diff --git
> > a/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiBlockIo.h
> > b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiBlockIo.h
> > new file mode 100644
> > index 0000000000..76e5970fe7
> > --- /dev/null
> > +++ b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiBlockIo.h
> > @@ -0,0 +1,266 @@
> > +/** @file
> > + The NvmExpressPei driver is used to manage non-volatile memory
> > subsystem
> > + which follows NVM Express specification at PEI phase.
> > +
> > + Copyright (c) 2018, Intel Corporation. All rights reserved.<BR>
> > +
> > + This program and the accompanying materials
> > + are licensed and made available under the terms and conditions
> > + of the BSD License which accompanies this distribution. The
> > + full text of the license may be found at
> > + http://opensource.org/licenses/bsd-license.php
> > +
> > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> > BASIS,
> > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> > EXPRESS OR IMPLIED.
> > +
> > +**/
> > +
> > +#ifndef _NVM_EXPRESS_PEI_BLOCKIO_H_
> > +#define _NVM_EXPRESS_PEI_BLOCKIO_H_
> > +
> > +//
> > +// Nvme device for EFI_PEI_BLOCK_DEVICE_TYPE
> > +//
> > +#define EDKII_PEI_BLOCK_DEVICE_TYPE_NVME 7
> > +
> > +#define NVME_READ_MAX_RETRY 3
> > +
> > +/**
> > + Gets the count of block I/O devices that one specific block driver detects.
> > +
> > + This function is used for getting the count of block I/O devices that one
> > + specific block driver detects. If no device is detected, then the function
> > + will return zero.
> > +
> > + @param[in] PeiServices General-purpose services that are available
> > + to every PEIM.
> > + @param[in] This Indicates the
> EFI_PEI_RECOVERY_BLOCK_IO_PPI
> > + instance.
> > + @param[out] NumberBlockDevices The number of block I/O devices
> > discovered.
> > +
> > + @retval EFI_SUCCESS The operation performed successfully.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +NvmeBlockIoPeimGetDeviceNo (
> > + IN EFI_PEI_SERVICES **PeiServices,
> > + IN EFI_PEI_RECOVERY_BLOCK_IO_PPI *This,
> > + OUT UINTN *NumberBlockDevices
> > + );
> > +
> > +/**
> > + Gets a block device's media information.
> > +
> > + This function will provide the caller with the specified block device's media
> > + information. If the media changes, calling this function will update the
> > media
> > + information accordingly.
> > +
> > + @param[in] PeiServices General-purpose services that are available to
> > every
> > + PEIM
> > + @param[in] This Indicates the EFI_PEI_RECOVERY_BLOCK_IO_PPI
> > instance.
> > + @param[in] DeviceIndex Specifies the block device to which the function
> > wants
> > + to talk. Because the driver that implements Block I/O
> > + PPIs will manage multiple block devices, the PPIs that
> > + want to talk to a single device must specify the
> > + device index that was assigned during the enumeration
> > + process. This index is a number from one to
> > + NumberBlockDevices.
> > + @param[out] MediaInfo The media information of the specified block
> > media.
> > + The caller is responsible for the ownership of this
> > + data structure.
> > +
> > + @par Note:
> > + The MediaInfo structure describes an enumeration of possible block
> > device
> > + types. This enumeration exists because no device paths are actually
> > passed
> > + across interfaces that describe the type or class of hardware that is
> > publishing
> > + the block I/O interface. This enumeration will allow for policy decisions
> > + in the Recovery PEIM, such as "Try to recover from legacy floppy first,
> > + LS-120 second, CD-ROM third." If there are multiple partitions
> abstracted
> > + by a given device type, they should be reported in ascending order; this
> > + order also applies to nested partitions, such as legacy MBR, where the
> > + outermost partitions would have precedence in the reporting order. The
> > + same logic applies to systems such as IDE that have precedence
> > relationships
> > + like "Master/Slave" or "Primary/Secondary". The master device should
> > be
> > + reported first, the slave second.
> > +
> > + @retval EFI_SUCCESS Media information about the specified block
> > device
> > + was obtained successfully.
> > + @retval EFI_DEVICE_ERROR Cannot get the media information due to a
> > hardware
> > + error.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +NvmeBlockIoPeimGetMediaInfo (
> > + IN EFI_PEI_SERVICES **PeiServices,
> > + IN EFI_PEI_RECOVERY_BLOCK_IO_PPI *This,
> > + IN UINTN DeviceIndex,
> > + OUT EFI_PEI_BLOCK_IO_MEDIA *MediaInfo
> > + );
> > +
> > +/**
> > + Reads the requested number of blocks from the specified block device.
> > +
> > + The function reads the requested number of blocks from the device. All
> > the
> > + blocks are read, or an error is returned. If there is no media in the device,
> > + the function returns EFI_NO_MEDIA.
> > +
> > + @param[in] PeiServices General-purpose services that are available to
> > + every PEIM.
> > + @param[in] This Indicates the EFI_PEI_RECOVERY_BLOCK_IO_PPI
> > instance.
> > + @param[in] DeviceIndex Specifies the block device to which the function
> > wants
> > + to talk. Because the driver that implements Block I/O
> > + PPIs will manage multiple block devices, PPIs that
> > + want to talk to a single device must specify the device
> > + index that was assigned during the enumeration process.
> > + This index is a number from one to NumberBlockDevices.
> > + @param[in] StartLBA The starting logical block address (LBA) to read
> > from
> > + on the device
> > + @param[in] BufferSize The size of the Buffer in bytes. This number must
> > be
> > + a multiple of the intrinsic block size of the device.
> > + @param[out] Buffer A pointer to the destination buffer for the data.
> > + The caller is responsible for the ownership of the
> > + buffer.
> > +
> > + @retval EFI_SUCCESS The data was read correctly from the device.
> > + @retval EFI_DEVICE_ERROR The device reported an error while
> > attempting
> > + to perform the read operation.
> > + @retval EFI_INVALID_PARAMETER The read request contains LBAs that
> > are not
> > + valid, or the buffer is not properly aligned.
> > + @retval EFI_NO_MEDIA There is no media in the device.
> > + @retval EFI_BAD_BUFFER_SIZE The BufferSize parameter is not a
> > multiple of
> > + the intrinsic block size of the device.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +NvmeBlockIoPeimReadBlocks (
> > + IN EFI_PEI_SERVICES **PeiServices,
> > + IN EFI_PEI_RECOVERY_BLOCK_IO_PPI *This,
> > + IN UINTN DeviceIndex,
> > + IN EFI_PEI_LBA StartLBA,
> > + IN UINTN BufferSize,
> > + OUT VOID *Buffer
> > + );
> > +
> > +/**
> > + Gets the count of block I/O devices that one specific block driver detects.
> > +
> > + This function is used for getting the count of block I/O devices that one
> > + specific block driver detects. If no device is detected, then the function
> > + will return zero.
> > +
> > + @param[in] PeiServices General-purpose services that are available
> > + to every PEIM.
> > + @param[in] This Indicates the
> > EFI_PEI_RECOVERY_BLOCK_IO2_PPI
> > + instance.
> > + @param[out] NumberBlockDevices The number of block I/O devices
> > discovered.
> > +
> > + @retval EFI_SUCCESS The operation performed successfully.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +NvmeBlockIoPeimGetDeviceNo2 (
> > + IN EFI_PEI_SERVICES **PeiServices,
> > + IN EFI_PEI_RECOVERY_BLOCK_IO2_PPI *This,
> > + OUT UINTN *NumberBlockDevices
> > + );
> > +
> > +/**
> > + Gets a block device's media information.
> > +
> > + This function will provide the caller with the specified block device's media
> > + information. If the media changes, calling this function will update the
> > media
> > + information accordingly.
> > +
> > + @param[in] PeiServices General-purpose services that are available to
> > every
> > + PEIM
> > + @param[in] This Indicates the EFI_PEI_RECOVERY_BLOCK_IO2_PPI
> > instance.
> > + @param[in] DeviceIndex Specifies the block device to which the function
> > wants
> > + to talk. Because the driver that implements Block I/O
> > + PPIs will manage multiple block devices, the PPIs that
> > + want to talk to a single device must specify the
> > + device index that was assigned during the enumeration
> > + process. This index is a number from one to
> > + NumberBlockDevices.
> > + @param[out] MediaInfo The media information of the specified block
> > media.
> > + The caller is responsible for the ownership of this
> > + data structure.
> > +
> > + @par Note:
> > + The MediaInfo structure describes an enumeration of possible block
> > device
> > + types. This enumeration exists because no device paths are actually
> > passed
> > + across interfaces that describe the type or class of hardware that is
> > publishing
> > + the block I/O interface. This enumeration will allow for policy decisions
> > + in the Recovery PEIM, such as "Try to recover from legacy floppy first,
> > + LS-120 second, CD-ROM third." If there are multiple partitions
> abstracted
> > + by a given device type, they should be reported in ascending order; this
> > + order also applies to nested partitions, such as legacy MBR, where the
> > + outermost partitions would have precedence in the reporting order. The
> > + same logic applies to systems such as IDE that have precedence
> > relationships
> > + like "Master/Slave" or "Primary/Secondary". The master device should
> > be
> > + reported first, the slave second.
> > +
> > + @retval EFI_SUCCESS Media information about the specified block
> > device
> > + was obtained successfully.
> > + @retval EFI_DEVICE_ERROR Cannot get the media information due to a
> > hardware
> > + error.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +NvmeBlockIoPeimGetMediaInfo2 (
> > + IN EFI_PEI_SERVICES **PeiServices,
> > + IN EFI_PEI_RECOVERY_BLOCK_IO2_PPI *This,
> > + IN UINTN DeviceIndex,
> > + OUT EFI_PEI_BLOCK_IO2_MEDIA *MediaInfo
> > + );
> > +
> > +/**
> > + Reads the requested number of blocks from the specified block device.
> > +
> > + The function reads the requested number of blocks from the device. All
> > the
> > + blocks are read, or an error is returned. If there is no media in the device,
> > + the function returns EFI_NO_MEDIA.
> > +
> > + @param[in] PeiServices General-purpose services that are available to
> > + every PEIM.
> > + @param[in] This Indicates the EFI_PEI_RECOVERY_BLOCK_IO2_PPI
> > instance.
> > + @param[in] DeviceIndex Specifies the block device to which the function
> > wants
> > + to talk. Because the driver that implements Block I/O
> > + PPIs will manage multiple block devices, PPIs that
> > + want to talk to a single device must specify the device
> > + index that was assigned during the enumeration process.
> > + This index is a number from one to NumberBlockDevices.
> > + @param[in] StartLBA The starting logical block address (LBA) to read
> > from
> > + on the device
> > + @param[in] BufferSize The size of the Buffer in bytes. This number must
> > be
> > + a multiple of the intrinsic block size of the device.
> > + @param[out] Buffer A pointer to the destination buffer for the data.
> > + The caller is responsible for the ownership of the
> > + buffer.
> > +
> > + @retval EFI_SUCCESS The data was read correctly from the device.
> > + @retval EFI_DEVICE_ERROR The device reported an error while
> > attempting
> > + to perform the read operation.
> > + @retval EFI_INVALID_PARAMETER The read request contains LBAs that
> > are not
> > + valid, or the buffer is not properly aligned.
> > + @retval EFI_NO_MEDIA There is no media in the device.
> > + @retval EFI_BAD_BUFFER_SIZE The BufferSize parameter is not a
> > multiple of
> > + the intrinsic block size of the device.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +NvmeBlockIoPeimReadBlocks2 (
> > + IN EFI_PEI_SERVICES **PeiServices,
> > + IN EFI_PEI_RECOVERY_BLOCK_IO2_PPI *This,
> > + IN UINTN DeviceIndex,
> > + IN EFI_PEI_LBA StartLBA,
> > + IN UINTN BufferSize,
> > + OUT VOID *Buffer
> > + );
> > +
> > +#endif
> > diff --git
> > a/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiExtra.uni
> > b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiExtra.uni
> > new file mode 100644
> > index 0000000000..8c97c0a8a9
> > --- /dev/null
> > +++ b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiExtra.uni
> > @@ -0,0 +1,19 @@
> > +// /** @file
> > +// NvmExpressPei Localized Strings and Content
> > +//
> > +// Copyright (c) 2018, Intel Corporation. All rights reserved.<BR>
> > +//
> > +// This program and the accompanying materials
> > +// are licensed and made available under the terms and conditions
> > +// of the BSD License which accompanies this distribution. The
> > +// full text of the license may be found at
> > +// http://opensource.org/licenses/bsd-license.php
> > +//
> > +// THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> > BASIS,
> > +// WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> > EXPRESS OR IMPLIED.
> > +//
> > +// **/
> > +
> > +#string STR_PROPERTIES_MODULE_NAME
> > +#language en-US
> > +"NVM Express Peim"
> > diff --git a/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiHci.c
> > b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiHci.c
> > new file mode 100644
> > index 0000000000..d4056a2a5b
> > --- /dev/null
> > +++ b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiHci.c
> > @@ -0,0 +1,748 @@
> > +/** @file
> > + The NvmExpressPei driver is used to manage non-volatile memory
> > subsystem
> > + which follows NVM Express specification at PEI phase.
> > +
> > + Copyright (c) 2018, Intel Corporation. All rights reserved.<BR>
> > +
> > + This program and the accompanying materials
> > + are licensed and made available under the terms and conditions
> > + of the BSD License which accompanies this distribution. The
> > + full text of the license may be found at
> > + http://opensource.org/licenses/bsd-license.php
> > +
> > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> > BASIS,
> > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> > EXPRESS OR IMPLIED.
> > +
> > +**/
> > +
> > +#include "NvmExpressPei.h"
> > +
> > +/**
> > + Transfer MMIO Data to memory.
> > +
> > + @param[in,out] MemBuffer Destination: Memory address.
> > + @param[in] MmioAddr Source: MMIO address.
> > + @param[in] Size Size for read.
> > +
> > + @retval EFI_SUCCESS MMIO read successfully.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeMmioRead (
> > + IN OUT VOID *MemBuffer,
> > + IN UINTN MmioAddr,
> > + IN UINTN Size
> > + )
> > +{
> > + UINTN Offset;
> > + UINT8 Data;
> > + UINT8 *Ptr;
> > +
> > + // The case order is arranged so the most common access widths are handled first.
> > + switch (Size) {
> > + case 4:
> > + *((UINT32 *)MemBuffer) = MmioRead32 (MmioAddr);
> > + break;
> > +
> > + case 8:
> > + *((UINT64 *)MemBuffer) = MmioRead64 (MmioAddr);
> > + break;
> > +
> > + case 2:
> > + *((UINT16 *)MemBuffer) = MmioRead16 (MmioAddr);
> > + break;
> > +
> > + case 1:
> > + *((UINT8 *)MemBuffer) = MmioRead8 (MmioAddr);
> > + break;
> > +
> > + default:
> > + Ptr = (UINT8 *)MemBuffer;
> > + for (Offset = 0; Offset < Size; Offset += 1) {
> > + Data = MmioRead8 (MmioAddr + Offset);
> > + Ptr[Offset] = Data;
> > + }
> > + break;
> > + }
> > +
> > + return EFI_SUCCESS;
> > +}
> > +
> > +/**
> > + Transfer memory data to MMIO.
> > +
> > + @param[in,out] MmioAddr Destination: MMIO address.
> > + @param[in] MemBuffer Source: Memory address.
> > + @param[in] Size Size for write.
> > +
> > + @retval EFI_SUCCESS MMIO write successfully.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeMmioWrite (
> > + IN OUT UINTN MmioAddr,
> > + IN VOID *MemBuffer,
> > + IN UINTN Size
> > + )
> > +{
> > + UINTN Offset;
> > + UINT8 Data;
> > + UINT8 *Ptr;
> > +
> > + // The case order is arranged so the most common access widths are handled first.
> > + switch (Size) {
> > + case 4:
> > + MmioWrite32 (MmioAddr, *((UINT32 *)MemBuffer));
> > + break;
> > +
> > + case 8:
> > + MmioWrite64 (MmioAddr, *((UINT64 *)MemBuffer));
> > + break;
> > +
> > + case 2:
> > + MmioWrite16 (MmioAddr, *((UINT16 *)MemBuffer));
> > + break;
> > +
> > + case 1:
> > + MmioWrite8 (MmioAddr, *((UINT8 *)MemBuffer));
> > + break;
> > +
> > + default:
> > + Ptr = (UINT8 *)MemBuffer;
> > + for (Offset = 0; Offset < Size; Offset += 1) {
> > + Data = Ptr[Offset];
> > + MmioWrite8 (MmioAddr + Offset, Data);
> > + }
> > + break;
> > + }
> > +
> > + return EFI_SUCCESS;
> > +}
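
To make the intended use of the two helpers above easier to follow, here is a
minimal sketch (the NVME_GET_CC/NVME_SET_CC macros are defined later in
NvmExpressPeiHci.h in this patch; the error handling is illustrative only):

  NVME_CC     Cc;
  EFI_STATUS  Status;

  //
  // Read CC through the size-dispatching MMIO read helper, toggle the
  // enable bit, then write it back through the MMIO write helper.
  //
  Status = NVME_GET_CC (Private, &Cc);
  if (!EFI_ERROR (Status)) {
    Cc.En  = 0;
    Status = NVME_SET_CC (Private, &Cc);
  }
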
> > +
> > +/**
> > + Get the page offset for specific NVME based memory.
> > +
> > + @param[in] BaseMemIndex The Index of BaseMem (0-based).
> > +
> > + @retval - The page offset for the specified BaseMem index.
> > +
> > +**/
> > +UINT32
> > +NvmeBaseMemPageOffset (
> > + IN UINTN BaseMemIndex
> > + )
> > +{
> > + UINT32 Pages;
> > + UINTN Index;
> > + UINT32 PageSizeList[5];
> > +
> > + PageSizeList[0] = 1; /* ASQ */
> > + PageSizeList[1] = 1; /* ACQ */
> > + PageSizeList[2] = 1; /* SQs */
> > + PageSizeList[3] = 1; /* CQs */
> > + PageSizeList[4] = NVME_PRP_SIZE; /* PRPs */
> > +
> > + if (BaseMemIndex > MAX_BASEMEM_COUNT) {
> > + DEBUG ((DEBUG_ERROR, "%a: The input BaseMem index is invalid.\n",
> > __FUNCTION__));
> > + ASSERT (FALSE);
> > + return 0;
> > + }
> > +
> > + Pages = 0;
> > + for (Index = 0; Index < BaseMemIndex; Index++) {
> > + Pages += PageSizeList[Index];
> > + }
> > +
> > + return Pages;
> > +}
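
For reference, with the PageSizeList[] values above the page offsets inside
Private->Buffer work out as follows (a sketch; NVME_PRP_SIZE is the PRP page
count defined elsewhere in this patch):

  //
  // Layout of Private->Buffer (one EFI_PAGE_SIZE page per region, except PRP):
  //   BASEMEM_ASQ -> page 0                            (admin submission queue)
  //   BASEMEM_ACQ -> page 1                            (admin completion queue)
  //   BASEMEM_SQ  -> page 2                            (I/O submission queue)
  //   BASEMEM_CQ  -> page 3                            (I/O completion queue)
  //   BASEMEM_PRP -> page 4 .. 4 + NVME_PRP_SIZE - 1   (PRP lists)
  //
  ASSERT (NvmeBaseMemPageOffset (BASEMEM_PRP) == 4);
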
> > +
> > +/**
> > + Wait for the NVME controller status ready bit (CSTS.RDY) to be set or cleared.
> > +
> > + @param[in] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > + @param[in] WaitReady TRUE to wait for CSTS.RDY to be set; FALSE to wait for it to be cleared.
> > +
> > + @return EFI_SUCCESS The controller reached the expected status in time.
> > + @return others Failed to wait for the expected controller status.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeWaitController (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private,
> > + IN BOOLEAN WaitReady
> > + )
> > +{
> > + NVME_CSTS Csts;
> > + EFI_STATUS Status;
> > + UINT32 Index;
> > + UINT8 Timeout;
> > +
> > + //
> > + // Cap.To specifies the max delay time in 500 ms increments for Csts.Rdy to
> > + // change after Cc.Enable is toggled. The loop below delays 1 millisecond per
> > + // iteration, up to 500 * Cap.To iterations.
> > + //
> > + if (Private->Cap.To == 0) {
> > + Timeout = 1;
> > + } else {
> > + Timeout = Private->Cap.To;
> > + }
> > +
> > + Status = EFI_SUCCESS;
> > + for(Index = (Timeout * 500); Index != 0; --Index) {
> > + MicroSecondDelay (1000);
> > +
> > + //
> > + // Check if the controller is initialized
> > + //
> > + Status = NVME_GET_CSTS (Private, &Csts);
> > + if (EFI_ERROR(Status)) {
> > + DEBUG ((DEBUG_ERROR, "%a: NVME_GET_CSTS fail, Status - %r\n",
> > __FUNCTION__, Status));
> > + return Status;
> > + }
> > +
> > + if ((BOOLEAN) Csts.Rdy == WaitReady) {
> > + break;
> > + }
> > + }
> > +
> > + if (Index == 0) {
> > + Status = EFI_TIMEOUT;
> > + }
> > +
> > + return Status;
> > +}
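
A quick sanity check on the timeout arithmetic above, using a hypothetical
Cap.To value:

  //
  // Example: Cap.To = 10
  //   Spec worst case = 10 * 500 ms = 5 s
  //   Loop bound      = 10 * 500    = 5000 iterations of MicroSecondDelay (1000), i.e. the same 5 s
  //
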
> > +
> > +/**
> > + Disable the Nvm Express controller.
> > +
> > + @param[in] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > +
> > + @return EFI_SUCCESS Successfully disable the controller.
> > + @return others Fail to disable the controller.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeDisableController (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private
> > + )
> > +{
> > + NVME_CC Cc;
> > + NVME_CSTS Csts;
> > + EFI_STATUS Status;
> > +
> > + Status = NVME_GET_CSTS (Private, &Csts);
> > +
> > + //
> > + // Read Controller Configuration Register.
> > + //
> > + Status = NVME_GET_CC (Private, &Cc);
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((DEBUG_ERROR, "%a: NVME_GET_CC fail, Status - %r\n",
> > __FUNCTION__, Status));
> > + goto ErrorExit;
> > + }
> > +
> > + if (Cc.En == 1) {
> > + Cc.En = 0;
> > + //
> > + // Disable the controller.
> > + //
> > + Status = NVME_SET_CC (Private, &Cc);
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((DEBUG_ERROR, "%a: NVME_SET_CC fail, Status - %r\n",
> > __FUNCTION__, Status));
> > + goto ErrorExit;
> > + }
> > + }
> > +
> > + Status = NvmeWaitController (Private, FALSE);
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((DEBUG_ERROR, "%a: NvmeWaitController fail, Status - %r\n",
> > __FUNCTION__, Status));
> > + goto ErrorExit;
> > + }
> > +
> > + return EFI_SUCCESS;
> > +
> > +ErrorExit:
> > + DEBUG ((DEBUG_ERROR, "%a fail, Status - %r\n", __FUNCTION__, Status));
> > + return Status;
> > +}
> > +
> > +/**
> > + Enable the Nvm Express controller.
> > +
> > + @param[in] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > +
> > + @return EFI_SUCCESS Successfully enable the controller.
> > + @return EFI_DEVICE_ERROR Fail to enable the controller.
> > + @return EFI_TIMEOUT Fail to enable the controller in given time slot.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeEnableController (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private
> > + )
> > +{
> > + NVME_CC Cc;
> > + EFI_STATUS Status;
> > +
> > + //
> > + // Enable the controller
> > + // CC.AMS, CC.MPS and CC.CSS are all set to 0
> > + //
> > + ZeroMem (&Cc, sizeof (NVME_CC));
> > + Cc.En = 1;
> > + Cc.Iosqes = 6;
> > + Cc.Iocqes = 4;
> > + Status = NVME_SET_CC (Private, &Cc);
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((DEBUG_ERROR, "%a: NVME_SET_CC fail, Status - %r\n",
> > __FUNCTION__, Status));
> > + goto ErrorExit;
> > + }
> > +
> > + Status = NvmeWaitController (Private, TRUE);
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((DEBUG_ERROR, "%a: NvmeWaitController fail, Status - %r\n",
> > __FUNCTION__, Status));
> > + goto ErrorExit;
> > + }
> > +
> > + return EFI_SUCCESS;
> > +
> > +ErrorExit:
> > + DEBUG ((DEBUG_ERROR, "%a fail, Status: %r\n", __FUNCTION__, Status));
> > + return Status;
> > +}
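
One note on the values above, in case it helps future readers: CC.IOSQES and
CC.IOCQES are log2 of the queue entry sizes, so 6 and 4 correspond to 64-byte
submission and 16-byte completion queue entries. A sketch of the relationship
(assuming the standard NVME_SQ/NVME_CQ layouts from MdePkg's Nvme.h):

  //
  // CC.Iosqes = 6 -> 2^6 = 64-byte I/O submission queue entries
  // CC.Iocqes = 4 -> 2^4 = 16-byte I/O completion queue entries
  //
  ASSERT (sizeof (NVME_SQ) == (1 << 6));
  ASSERT (sizeof (NVME_CQ) == (1 << 4));
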
> > +
> > +/**
> > + Get the Identify Controller data.
> > +
> > + @param[in] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > + @param[in] Buffer The Buffer used to store the Identify Controller data.
> > +
> > + @return EFI_SUCCESS Successfully get the Identify Controller data.
> > + @return others Fail to get the Identify Controller data.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeIdentifyController (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private,
> > + IN VOID *Buffer
> > + )
> > +{
> > + EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET
> > CommandPacket;
> > + EDKII_PEI_NVM_EXPRESS_COMMAND Command;
> > + EDKII_PEI_NVM_EXPRESS_COMPLETION Completion;
> > + EFI_STATUS Status;
> > +
> > + ZeroMem (&CommandPacket,
> > sizeof(EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET));
> > + ZeroMem (&Command, sizeof(EDKII_PEI_NVM_EXPRESS_COMMAND));
> > + ZeroMem (&Completion,
> > sizeof(EDKII_PEI_NVM_EXPRESS_COMPLETION));
> > +
> > + Command.Cdw0.Opcode = NVME_ADMIN_IDENTIFY_CMD;
> > + //
> > + // According to the NVM Express 1.1 spec, Figure 38, when not used, the field shall be cleared to 0h.
> > + // For the Identify command, the Namespace Identifier is only used for the Namespace Data structure.
> > + //
> > + Command.Nsid = 0;
> > +
> > + CommandPacket.NvmeCmd = &Command;
> > + CommandPacket.NvmeCompletion = &Completion;
> > + CommandPacket.TransferBuffer = Buffer;
> > + CommandPacket.TransferLength = sizeof
> > (NVME_ADMIN_CONTROLLER_DATA);
> > + CommandPacket.CommandTimeout = NVME_GENERIC_TIMEOUT;
> > + CommandPacket.QueueType = NVME_ADMIN_QUEUE;
> > + //
> > + // Set bit 0 (Cns bit) to 1 to identify the controller
> > + //
> > + CommandPacket.NvmeCmd->Cdw10 = 1;
> > + CommandPacket.NvmeCmd->Flags = CDW10_VALID;
> > +
> > + Status = NvmePassThru (
> > + Private,
> > + NVME_CONTROLLER_NSID,
> > + &CommandPacket
> > + );
> > + return Status;
> > +}
> > +
> > +/**
> > + Get specified identify namespace data.
> > +
> > + @param[in] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > + @param[in] NamespaceId The specified namespace identifier.
> > + @param[in] Buffer The buffer used to store the identify namespace
> > data.
> > +
> > + @return EFI_SUCCESS Successfully get the identify namespace data.
> > + @return EFI_DEVICE_ERROR Fail to get the identify namespace data.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeIdentifyNamespace (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private,
> > + IN UINT32 NamespaceId,
> > + IN VOID *Buffer
> > + )
> > +{
> > + EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET
> > CommandPacket;
> > + EDKII_PEI_NVM_EXPRESS_COMMAND Command;
> > + EDKII_PEI_NVM_EXPRESS_COMPLETION Completion;
> > + EFI_STATUS Status;
> > +
> > + ZeroMem (&CommandPacket,
> > sizeof(EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET));
> > + ZeroMem (&Command, sizeof(EDKII_PEI_NVM_EXPRESS_COMMAND));
> > + ZeroMem (&Completion,
> > sizeof(EDKII_PEI_NVM_EXPRESS_COMPLETION));
> > +
> > + Command.Cdw0.Opcode = NVME_ADMIN_IDENTIFY_CMD;
> > + Command.Nsid = NamespaceId;
> > +
> > + CommandPacket.NvmeCmd = &Command;
> > + CommandPacket.NvmeCompletion = &Completion;
> > + CommandPacket.TransferBuffer = Buffer;
> > + CommandPacket.TransferLength = sizeof
> > (NVME_ADMIN_NAMESPACE_DATA);
> > + CommandPacket.CommandTimeout = NVME_GENERIC_TIMEOUT;
> > + CommandPacket.QueueType = NVME_ADMIN_QUEUE;
> > + //
> > + // Clear bit 0 (Cns bit) to 0 to identify a namespace
> > + //
> > + CommandPacket.NvmeCmd->Cdw10 = 0;
> > + CommandPacket.NvmeCmd->Flags = CDW10_VALID;
> > +
> > + Status = NvmePassThru (
> > + Private,
> > + NamespaceId,
> > + &CommandPacket
> > + );
> > + return Status;
> > +}
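
For completeness, CDW10.CNS selects which data structure Identify returns in
NVMe 1.1: 0 for the namespace data structure (as in the function above) and 1
for the controller data structure (as in NvmeIdentifyController). A minimal
caller sketch, with namespace ID 1 used purely as an example value:

  NVME_ADMIN_NAMESPACE_DATA  NamespaceData;
  EFI_STATUS                 Status;

  //
  // Fetch the Identify Namespace data for namespace 1 (example value).
  //
  Status = NvmeIdentifyNamespace (Private, 1, (VOID *)&NamespaceData);
  if (EFI_ERROR (Status)) {
    DEBUG ((DEBUG_ERROR, "NvmeIdentifyNamespace fail, Status - %r\n", Status));
  }
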
> > +
> > +/**
> > + Dump the Identify Controller data.
> > +
> > + @param[in] ControllerData The pointer to the
> > NVME_ADMIN_CONTROLLER_DATA data structure.
> > +
> > +**/
> > +VOID
> > +NvmeDumpControllerData (
> > + IN NVME_ADMIN_CONTROLLER_DATA *ControllerData
> > + )
> > +{
> > + UINT8 Sn[21];
> > + UINT8 Mn[41];
> > +
> > + CopyMem (Sn, ControllerData->Sn, sizeof (ControllerData->Sn));
> > + Sn[20] = 0;
> > + CopyMem (Mn, ControllerData->Mn, sizeof (ControllerData->Mn));
> > + Mn[40] = 0;
> > +
> > + DEBUG ((DEBUG_INFO, " == NVME IDENTIFY CONTROLLER DATA ==\n"));
> > + DEBUG ((DEBUG_INFO, " PCI VID : 0x%x\n", ControllerData->Vid));
> > + DEBUG ((DEBUG_INFO, " PCI SSVID : 0x%x\n", ControllerData->Ssvid));
> > + DEBUG ((DEBUG_INFO, " SN : %a\n", Sn));
> > + DEBUG ((DEBUG_INFO, " MN : %a\n", Mn));
> > + DEBUG ((DEBUG_INFO, " FR : 0x%lx\n", *((UINT64*)ControllerData-
> > >Fr)));
> > + DEBUG ((DEBUG_INFO, " RAB : 0x%x\n", ControllerData->Rab));
> > + DEBUG ((DEBUG_INFO, " IEEE : 0x%x\n", *(UINT32*)ControllerData-
> > >Ieee_oui));
> > + DEBUG ((DEBUG_INFO, " AERL : 0x%x\n", ControllerData->Aerl));
> > + DEBUG ((DEBUG_INFO, " SQES : 0x%x\n", ControllerData->Sqes));
> > + DEBUG ((DEBUG_INFO, " CQES : 0x%x\n", ControllerData->Cqes));
> > + DEBUG ((DEBUG_INFO, " NN : 0x%x\n", ControllerData->Nn));
> > + return;
> > +}
> > +
> > +/**
> > + Create IO completion queue.
> > +
> > + @param[in] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > +
> > + @return EFI_SUCCESS Successfully create io completion queue.
> > + @return others Fail to create io completion queue.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeCreateIoCompletionQueue (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private
> > + )
> > +{
> > + EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET
> > CommandPacket;
> > + EDKII_PEI_NVM_EXPRESS_COMMAND Command;
> > + EDKII_PEI_NVM_EXPRESS_COMPLETION Completion;
> > + EFI_STATUS Status;
> > + NVME_ADMIN_CRIOCQ CrIoCq;
> > +
> > + ZeroMem (&CommandPacket,
> > sizeof(EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET));
> > + ZeroMem (&Command, sizeof(EDKII_PEI_NVM_EXPRESS_COMMAND));
> > + ZeroMem (&Completion,
> > sizeof(EDKII_PEI_NVM_EXPRESS_COMPLETION));
> > + ZeroMem (&CrIoCq, sizeof(NVME_ADMIN_CRIOCQ));
> > +
> > + CommandPacket.NvmeCmd = &Command;
> > + CommandPacket.NvmeCompletion = &Completion;
> > +
> > + Command.Cdw0.Opcode = NVME_ADMIN_CRIOCQ_CMD;
> > + Command.Cdw0.Cid = Private->Cid[NVME_ADMIN_QUEUE]++;
> > + CommandPacket.TransferBuffer = Private->CqBuffer[NVME_IO_QUEUE];
> > + CommandPacket.TransferLength = EFI_PAGE_SIZE;
> > + CommandPacket.CommandTimeout = NVME_GENERIC_TIMEOUT;
> > + CommandPacket.QueueType = NVME_ADMIN_QUEUE;
> > +
> > + CrIoCq.Qid = NVME_IO_QUEUE;
> > + CrIoCq.Qsize = NVME_CCQ_SIZE;
> > + CrIoCq.Pc = 1;
> > + CopyMem (&CommandPacket.NvmeCmd->Cdw10, &CrIoCq, sizeof
> > (NVME_ADMIN_CRIOCQ));
> > + CommandPacket.NvmeCmd->Flags = CDW10_VALID | CDW11_VALID;
> > +
> > + Status = NvmePassThru (
> > + Private,
> > + NVME_CONTROLLER_NSID,
> > + &CommandPacket
> > + );
> > + return Status;
> > +}
> > +
> > +/**
> > + Create IO submission queue.
> > +
> > + @param[in] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > +
> > + @return EFI_SUCCESS Successfully create io submission queue.
> > + @return others Fail to create io submission queue.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeCreateIoSubmissionQueue (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private
> > + )
> > +{
> > + EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET
> > CommandPacket;
> > + EDKII_PEI_NVM_EXPRESS_COMMAND Command;
> > + EDKII_PEI_NVM_EXPRESS_COMPLETION Completion;
> > + EFI_STATUS Status;
> > + NVME_ADMIN_CRIOSQ CrIoSq;
> > +
> > + ZeroMem (&CommandPacket,
> > sizeof(EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET));
> > + ZeroMem (&Command, sizeof(EDKII_PEI_NVM_EXPRESS_COMMAND));
> > + ZeroMem (&Completion,
> > sizeof(EDKII_PEI_NVM_EXPRESS_COMPLETION));
> > + ZeroMem (&CrIoSq, sizeof(NVME_ADMIN_CRIOSQ));
> > +
> > + CommandPacket.NvmeCmd = &Command;
> > + CommandPacket.NvmeCompletion = &Completion;
> > +
> > + Command.Cdw0.Opcode = NVME_ADMIN_CRIOSQ_CMD;
> > + Command.Cdw0.Cid = Private->Cid[NVME_ADMIN_QUEUE]++;
> > + CommandPacket.TransferBuffer = Private->SqBuffer[NVME_IO_QUEUE];
> > + CommandPacket.TransferLength = EFI_PAGE_SIZE;
> > + CommandPacket.CommandTimeout = NVME_GENERIC_TIMEOUT;
> > + CommandPacket.QueueType = NVME_ADMIN_QUEUE;
> > +
> > + CrIoSq.Qid = NVME_IO_QUEUE;
> > + CrIoSq.Qsize = NVME_CSQ_SIZE;
> > + CrIoSq.Pc = 1;
> > + CrIoSq.Cqid = NVME_IO_QUEUE;
> > + CrIoSq.Qprio = 0;
> > + CopyMem (&CommandPacket.NvmeCmd->Cdw10, &CrIoSq, sizeof
> > (NVME_ADMIN_CRIOSQ));
> > + CommandPacket.NvmeCmd->Flags = CDW10_VALID | CDW11_VALID;
> > +
> > + Status = NvmePassThru (
> > + Private,
> > + NVME_CONTROLLER_NSID,
> > + &CommandPacket
> > + );
> > + return Status;
> > +}
> > +
> > +/**
> > + Initialize the Nvm Express controller.
> > +
> > + @param[in] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > +
> > + @retval EFI_SUCCESS The NVM Express Controller is initialized
> > successfully.
> > + @retval Others A device error occurred while initializing the
> controller.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeControllerInit (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private
> > + )
> > +{
> > + EFI_STATUS Status;
> > + UINTN Index;
> > + NVME_AQA Aqa;
> > + NVME_ASQ Asq;
> > + NVME_ACQ Acq;
> > + NVME_VER Ver;
> > +
> > + //
> > + // Dump the NVME controller implementation version
> > + //
> > + NVME_GET_VER (Private, &Ver);
> > + DEBUG ((DEBUG_INFO, "NVME controller implementation
> > version: %d.%d\n", Ver.Mjr, Ver.Mnr));
> > +
> > + //
> > + // Read the controller Capabilities register and verify that the NVM
> > command set is supported
> > + //
> > + NVME_GET_CAP (Private, &Private->Cap);
> > + if (Private->Cap.Css != 0x01) {
> > + DEBUG ((DEBUG_ERROR, "%a: The NVME controller doesn't support
> > NVMe command set.\n", __FUNCTION__));
> > + return EFI_UNSUPPORTED;
> > + }
> > +
> > + //
> > + // Currently, the driver only supports 4k page size
> > + //
> > + if ((Private->Cap.Mpsmin + 12) > EFI_PAGE_SHIFT) {
> > + DEBUG ((DEBUG_ERROR, "%a: The driver doesn't support page size other
> > than 4K.\n", __FUNCTION__));
> > + ASSERT (FALSE);
> > + return EFI_UNSUPPORTED;
> > + }
> > +
> > + for (Index = 0; Index < NVME_MAX_QUEUES; Index++) {
> > + Private->Pt[Index] = 0;
> > + Private->Cid[Index] = 0;
> > + ZeroMem ((VOID *)(UINTN)(&Private->SqTdbl[Index]), sizeof
> > (NVME_SQTDBL));
> > + ZeroMem ((VOID *)(UINTN)(&Private->CqHdbl[Index]), sizeof
> > (NVME_CQHDBL));
> > + }
> > + ZeroMem (Private->Buffer, EFI_PAGE_SIZE * NVME_MEM_MAX_PAGES);
> > +
> > + //
> > + // Disable the NVME controller first
> > + //
> > + Status = NvmeDisableController (Private);
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((DEBUG_ERROR, "%a: NvmeDisableController fail, Status - %r\n",
> > __FUNCTION__, Status));
> > + return Status;
> > + }
> > +
> > + //
> > + // Set the number of entries in admin submission & completion queues
> > + //
> > + Aqa.Asqs = NVME_ASQ_SIZE;
> > + Aqa.Rsvd1 = 0;
> > + Aqa.Acqs = NVME_ACQ_SIZE;
> > + Aqa.Rsvd2 = 0;
> > +
> > + //
> > + // Address of admin submission & completion queues
> > + //
> > + Asq = (UINT64)(UINTN)(NVME_ASQ_BASE (Private) & ~0xFFF);
> > + Acq = (UINT64)(UINTN)(NVME_ACQ_BASE (Private) & ~0xFFF);
> > +
> > + //
> > + // Address of I/O submission & completion queues
> > + //
> > + Private->SqBuffer[0] = (NVME_SQ *)(UINTN)NVME_ASQ_BASE (Private);
> > // NVME_ADMIN_QUEUE
> > + Private->CqBuffer[0] = (NVME_CQ *)(UINTN)NVME_ACQ_BASE (Private);
> > // NVME_ADMIN_QUEUE
> > + Private->SqBuffer[1] = (NVME_SQ *)(UINTN)NVME_SQ_BASE (Private, 0);
> > // NVME_IO_QUEUE
> > + Private->CqBuffer[1] = (NVME_CQ *)(UINTN)NVME_CQ_BASE (Private, 0);
> > // NVME_IO_QUEUE
> > + DEBUG ((DEBUG_INFO, "Admin Submission Queue Size (Aqa.Asqs) =
> > [%08X]\n", Aqa.Asqs));
> > + DEBUG ((DEBUG_INFO, "Admin Completion Queue Size (Aqa.Acqs) =
> > [%08X]\n", Aqa.Acqs));
> > + DEBUG ((DEBUG_INFO, "Admin Submission Queue (SqBuffer[0]) =
> > [%08X]\n", Private->SqBuffer[0]));
> > + DEBUG ((DEBUG_INFO, "Admin Completion Queue (CqBuffer[0]) =
> > [%08X]\n", Private->CqBuffer[0]));
> > + DEBUG ((DEBUG_INFO, "I/O Submission Queue (SqBuffer[1]) =
> > [%08X]\n", Private->SqBuffer[1]));
> > + DEBUG ((DEBUG_INFO, "I/O Completion Queue (CqBuffer[1]) =
> > [%08X]\n", Private->CqBuffer[1]));
> > +
> > + //
> > + // Program admin queue attributes
> > + //
> > + NVME_SET_AQA (Private, &Aqa);
> > +
> > + //
> > + // Program admin submission & completion queues address
> > + //
> > + NVME_SET_ASQ (Private, &Asq);
> > + NVME_SET_ACQ (Private, &Acq);
> > +
> > + //
> > + // Enable the NVME controller
> > + //
> > + Status = NvmeEnableController (Private);
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((DEBUG_ERROR, "%a: NvmeEnableController fail, Status - %r\n",
> > __FUNCTION__, Status));
> > + return Status;
> > + }
> > +
> > + //
> > + // Get the Identify Controller data
> > + //
> > + if (Private->ControllerData == NULL) {
> > + Private->ControllerData = (NVME_ADMIN_CONTROLLER_DATA
> > *)AllocateZeroPool (sizeof (NVME_ADMIN_CONTROLLER_DATA));
> > + if (Private->ControllerData == NULL) {
> > + return EFI_OUT_OF_RESOURCES;
> > + }
> > + }
> > + Status = NvmeIdentifyController (Private, Private->ControllerData);
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((DEBUG_ERROR, "%a: NvmeIdentifyController fail, Status - %r\n",
> > __FUNCTION__, Status));
> > + return Status;
> > + }
> > + NvmeDumpControllerData (Private->ControllerData);
> > +
> > + //
> > + // Check the namespace number for storing the namespaces information
> > + //
> > + if (Private->ControllerData->Nn > MAX_UINT32 / sizeof
> > (PEI_NVME_NAMESPACE_INFO)) {
> > + DEBUG ((
> > + DEBUG_ERROR,
> > + "%a: Number of Namespaces field in Identify Controller data not
> > supported by the driver.\n",
> > + __FUNCTION__
> > + ));
> > + return EFI_UNSUPPORTED;
> > + }
> > +
> > + //
> > + // Create one I/O completion queue and one I/O submission queue
> > + //
> > + Status = NvmeCreateIoCompletionQueue (Private);
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((DEBUG_ERROR, "%a: Create IO completion queue fail, Status -
> > %r\n", __FUNCTION__, Status));
> > + return Status;
> > + }
> > + Status = NvmeCreateIoSubmissionQueue (Private);
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((DEBUG_ERROR, "%a: Create IO submission queue fail, Status -
> > %r\n", __FUNCTION__, Status));
> > + }
> > +
> > + return Status;
> > +}
> > +
> > +/**
> > + Free the resources allocated by an NVME controller.
> > +
> > + @param[in] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > +
> > +**/
> > +VOID
> > +NvmeFreeControllerResource (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private
> > + )
> > +{
> > + //
> > + // Free the controller data buffer
> > + //
> > + if (Private->ControllerData != NULL) {
> > + FreePool (Private->ControllerData);
> > + Private->ControllerData = NULL;
> > + }
> > +
> > + //
> > + // Free the DMA buffers
> > + //
> > + if (Private->Buffer != NULL) {
> > + IoMmuFreeBuffer (
> > + NVME_MEM_MAX_PAGES,
> > + Private->Buffer,
> > + Private->BufferMapping
> > + );
> > + Private->Buffer = NULL;
> > + }
> > +
> > + //
> > + // Free the namespaces information buffer
> > + //
> > + if (Private->NamespaceInfo != NULL) {
> > + FreePool (Private->NamespaceInfo);
> > + Private->NamespaceInfo = NULL;
> > + }
> > +
> > + //
> > + // Free the controller private data structure
> > + //
> > + FreePool (Private);
> > + return;
> > +}
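
The expected error-path pairing, for reference (a sketch; the actual caller
lives in NvmExpressPei.c, which is not quoted in this hunk):

  Status = NvmeControllerInit (Private);
  if (EFI_ERROR (Status)) {
    //
    // Release the controller data, DMA buffers and the private structure
    // allocated so far before giving up on this controller.
    //
    NvmeFreeControllerResource (Private);
    Private = NULL;
  }
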
> > diff --git a/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiHci.h
> > b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiHci.h
> > new file mode 100644
> > index 0000000000..ff334e3e17
> > --- /dev/null
> > +++ b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiHci.h
> > @@ -0,0 +1,166 @@
> > +/** @file
> > + The NvmExpressPei driver is used to manage non-volatile memory
> > subsystem
> > + which follows NVM Express specification at PEI phase.
> > +
> > + Copyright (c) 2018, Intel Corporation. All rights reserved.<BR>
> > +
> > + This program and the accompanying materials
> > + are licensed and made available under the terms and conditions
> > + of the BSD License which accompanies this distribution. The
> > + full text of the license may be found at
> > + http://opensource.org/licenses/bsd-license.php
> > +
> > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> > BASIS,
> > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> > EXPRESS OR IMPLIED.
> > +
> > +**/
> > +
> > +#ifndef _NVM_EXPRESS_PEI_HCI_H_
> > +#define _NVM_EXPRESS_PEI_HCI_H_
> > +
> > +//
> > +// NVME host controller registers operation definitions
> > +//
> > +#define NVME_GET_CAP(Private, Cap) NvmeMmioRead (Cap,
> > Private->MmioBase + NVME_CAP_OFFSET, sizeof (NVME_CAP))
> > +#define NVME_GET_CC(Private, Cc) NvmeMmioRead (Cc, Private-
> > >MmioBase + NVME_CC_OFFSET, sizeof (NVME_CC))
> > +#define NVME_SET_CC(Private, Cc) NvmeMmioWrite (Private-
> > >MmioBase + NVME_CC_OFFSET, Cc, sizeof (NVME_CC))
> > +#define NVME_GET_CSTS(Private, Csts) NvmeMmioRead (Csts,
> > Private->MmioBase + NVME_CSTS_OFFSET, sizeof (NVME_CSTS))
> > +#define NVME_GET_AQA(Private, Aqa) NvmeMmioRead (Aqa,
> > Private->MmioBase + NVME_AQA_OFFSET, sizeof (NVME_AQA))
> > +#define NVME_SET_AQA(Private, Aqa) NvmeMmioWrite (Private-
> > >MmioBase + NVME_AQA_OFFSET, Aqa, sizeof (NVME_AQA))
> > +#define NVME_GET_ASQ(Private, Asq) NvmeMmioRead (Asq,
> > Private->MmioBase + NVME_ASQ_OFFSET, sizeof (NVME_ASQ))
> > +#define NVME_SET_ASQ(Private, Asq) NvmeMmioWrite (Private-
> > >MmioBase + NVME_ASQ_OFFSET, Asq, sizeof (NVME_ASQ))
> > +#define NVME_GET_ACQ(Private, Acq) NvmeMmioRead (Acq,
> > Private->MmioBase + NVME_ACQ_OFFSET, sizeof (NVME_ACQ))
> > +#define NVME_SET_ACQ(Private, Acq) NvmeMmioWrite (Private-
> > >MmioBase + NVME_ACQ_OFFSET, Acq, sizeof (NVME_ACQ))
> > +#define NVME_GET_VER(Private, Ver) NvmeMmioRead (Ver,
> > Private->MmioBase + NVME_VER_OFFSET, sizeof (NVME_VER))
> > +#define NVME_SET_SQTDBL(Private, Qid, Sqtdbl) NvmeMmioWrite
> > (Private->MmioBase + NVME_SQTDBL_OFFSET(Qid, Private->Cap.Dstrd),
> > Sqtdbl, sizeof (NVME_SQTDBL))
> > +#define NVME_SET_CQHDBL(Private, Qid, Cqhdbl) NvmeMmioWrite
> > (Private->MmioBase + NVME_CQHDBL_OFFSET(Qid, Private->Cap.Dstrd),
> > Cqhdbl, sizeof (NVME_CQHDBL))
> > +
> > +//
> > +// Base memory address enum types
> > +//
> > +enum {
> > + BASEMEM_ASQ,
> > + BASEMEM_ACQ,
> > + BASEMEM_SQ,
> > + BASEMEM_CQ,
> > + BASEMEM_PRP,
> > + MAX_BASEMEM_COUNT
> > +};
> > +
> > +//
> > +// All base memories are 4K (0x1000) aligned
> > +//
> > +#define ALIGN(v, a) (UINTN)((((v) - 1) | ((a) - 1)) + 1)
> > +#define NVME_MEM_BASE(Private) ((UINTN)(Private->Buffer))
> > +#define NVME_ASQ_BASE(Private) (ALIGN
> > (NVME_MEM_BASE(Private) + ((NvmeBaseMemPageOffset
> > (BASEMEM_ASQ)) * EFI_PAGE_SIZE), EFI_PAGE_SIZE))
> > +#define NVME_ACQ_BASE(Private) (ALIGN
> > (NVME_MEM_BASE(Private) + ((NvmeBaseMemPageOffset
> > (BASEMEM_ACQ)) * EFI_PAGE_SIZE), EFI_PAGE_SIZE))
> > +#define NVME_SQ_BASE(Private, Index) (ALIGN
> > (NVME_MEM_BASE(Private) + ((NvmeBaseMemPageOffset (BASEMEM_SQ)
> > + ((Index)*(NVME_MAX_QUEUES-1))) * EFI_PAGE_SIZE), EFI_PAGE_SIZE))
> > +#define NVME_CQ_BASE(Private, Index) (ALIGN
> > (NVME_MEM_BASE(Private) + ((NvmeBaseMemPageOffset (BASEMEM_CQ)
> > + ((Index)*(NVME_MAX_QUEUES-1))) * EFI_PAGE_SIZE), EFI_PAGE_SIZE))
> > +#define NVME_PRP_BASE(Private) (ALIGN
> > (NVME_MEM_BASE(Private) + ((NvmeBaseMemPageOffset
> > (BASEMEM_PRP)) * EFI_PAGE_SIZE), EFI_PAGE_SIZE))
> > +
> > +
> > +/**
> > + Transfer MMIO Data to memory.
> > +
> > + @param[in,out] MemBuffer Destination: Memory address.
> > + @param[in] MmioAddr Source: MMIO address.
> > + @param[in] Size Size for read.
> > +
> > + @retval EFI_SUCCESS MMIO read successfully.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeMmioRead (
> > + IN OUT VOID *MemBuffer,
> > + IN UINTN MmioAddr,
> > + IN UINTN Size
> > + );
> > +
> > +/**
> > + Transfer memory data to MMIO.
> > +
> > + @param[in,out] MmioAddr Destination: MMIO address.
> > + @param[in] MemBuffer Source: Memory address.
> > + @param[in] Size Size for write.
> > +
> > + @retval EFI_SUCCESS MMIO write successfully.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeMmioWrite (
> > + IN OUT UINTN MmioAddr,
> > + IN VOID *MemBuffer,
> > + IN UINTN Size
> > + );
> > +
> > +/**
> > + Get the page offset for specific NVME based memory.
> > +
> > + @param[in] BaseMemIndex The Index of BaseMem (0-based).
> > +
> > + @retval - The page offset for the specified BaseMem index.
> > +
> > +**/
> > +UINT32
> > +NvmeBaseMemPageOffset (
> > + IN UINTN BaseMemIndex
> > + );
> > +
> > +/**
> > + Disable the Nvm Express controller.
> > +
> > + @param[in] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > +
> > + @return EFI_SUCCESS Successfully disable the controller.
> > + @return others Fail to disable the controller.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeDisableController (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private
> > + );
> > +
> > +/**
> > + Initialize the Nvm Express controller.
> > +
> > + @param[in] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > +
> > + @retval EFI_SUCCESS The NVM Express Controller is initialized
> > successfully.
> > + @retval Others A device error occurred while initializing the
> controller.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeControllerInit (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private
> > + );
> > +
> > +/**
> > + Get specified identify namespace data.
> > +
> > + @param[in] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > + @param[in] NamespaceId The specified namespace identifier.
> > + @param[in] Buffer The buffer used to store the identify namespace
> > data.
> > +
> > + @return EFI_SUCCESS Successfully get the identify namespace data.
> > + @return EFI_DEVICE_ERROR Fail to get the identify namespace data.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeIdentifyNamespace (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private,
> > + IN UINT32 NamespaceId,
> > + IN VOID *Buffer
> > + );
> > +
> > +/**
> > + Free the resources allocated by an NVME controller.
> > +
> > + @param[in] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > +
> > +**/
> > +VOID
> > +NvmeFreeControllerResource (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private
> > + );
> > +
> > +#endif
> > diff --git
> > a/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiPassThru.c
> > b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiPassThru.c
> > new file mode 100644
> > index 0000000000..81ad01b7ee
> > --- /dev/null
> > +++ b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiPassThru.c
> > @@ -0,0 +1,628 @@
> > +/** @file
> > + The NvmExpressPei driver is used to manage non-volatile memory
> > subsystem
> > + which follows NVM Express specification at PEI phase.
> > +
> > + Copyright (c) 2018, Intel Corporation. All rights reserved.<BR>
> > +
> > + This program and the accompanying materials
> > + are licensed and made available under the terms and conditions
> > + of the BSD License which accompanies this distribution. The
> > + full text of the license may be found at
> > + http://opensource.org/licenses/bsd-license.php
> > +
> > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> > BASIS,
> > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> > EXPRESS OR IMPLIED.
> > +
> > +**/
> > +
> > +#include "NvmExpressPei.h"
> > +
> > +/**
> > + Create PRP lists for Data transfer which is larger than 2 memory pages.
> > +
> > + @param[in] Private The pointer to the
> > PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > + @param[in] PhysicalAddr The physical base address of Data Buffer.
> > + @param[in] Pages The number of pages to be transferred.
> > +
> > + @retval The physical address of the first PRP list.
> > +
> > +**/
> > +UINT64
> > +NvmeCreatePrpList (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private,
> > + IN EFI_PHYSICAL_ADDRESS PhysicalAddr,
> > + IN UINTN Pages
> > + )
> > +{
> > + UINTN PrpEntryNo;
> > + UINTN PrpListNo;
> > + UINT64 PrpListBase;
> > + VOID *PrpListHost;
> > + UINTN PrpListIndex;
> > + UINTN PrpEntryIndex;
> > + UINT64 Remainder;
> > + EFI_PHYSICAL_ADDRESS PrpListPhyAddr;
> > + UINTN Bytes;
> > + UINT8 *PrpEntry;
> > + EFI_PHYSICAL_ADDRESS NewPhyAddr;
> > +
> > + //
> > + // The number of Prp Entry in a memory page.
> > + //
> > + PrpEntryNo = EFI_PAGE_SIZE / sizeof (UINT64);
> > +
> > + //
> > + // Calculate total PrpList number.
> > + //
> > + PrpListNo = (UINTN) DivU64x64Remainder ((UINT64)Pages,
> > (UINT64)PrpEntryNo, &Remainder);
> > + if (Remainder != 0) {
> > + PrpListNo += 1;
> > + }
> > +
> > + if (PrpListNo > NVME_PRP_SIZE) {
> > + DEBUG ((
> > + DEBUG_ERROR,
> > + "%a: The implementation only supports PrpList number up to 4."
> > + " But %d are needed here.\n",
> > + __FUNCTION__,
> > + PrpListNo
> > + ));
> > + return 0;
> > + }
> > + PrpListHost = (VOID *)(UINTN) NVME_PRP_BASE (Private);
> > +
> > + Bytes = EFI_PAGES_TO_SIZE (PrpListNo);
> > + PrpListPhyAddr = (UINT64)(UINTN)(PrpListHost);
> > +
> > + //
> > + // Fill all PRP lists except the last one.
> > + //
> > + ZeroMem (PrpListHost, Bytes);
> > + for (PrpListIndex = 0; PrpListIndex < PrpListNo - 1; ++PrpListIndex) {
> > + PrpListBase = (UINTN)PrpListHost + PrpListIndex * EFI_PAGE_SIZE;
> > +
> > + for (PrpEntryIndex = 0; PrpEntryIndex < PrpEntryNo; ++PrpEntryIndex) {
> > + PrpEntry = (UINT8 *)(UINTN) (PrpListBase + PrpEntryIndex *
> > sizeof(UINT64));
> > + if (PrpEntryIndex != PrpEntryNo - 1) {
> > + //
> > + // Fill all PRP entries except the last one.
> > + //
> > + CopyMem (PrpEntry, (VOID *)(UINTN) (&PhysicalAddr), sizeof
> > (UINT64));
> > + PhysicalAddr += EFI_PAGE_SIZE;
> > + } else {
> > + //
> > + // Fill the last PRP entry with the pointer to the next PRP list.
> > + //
> > + NewPhyAddr = (PrpListPhyAddr + (PrpListIndex + 1) * EFI_PAGE_SIZE);
> > + CopyMem (PrpEntry, (VOID *)(UINTN) (&NewPhyAddr), sizeof
> > (UINT64));
> > + }
> > + }
> > + }
> > +
> > + //
> > + // Fill the last PRP list.
> > + //
> > + PrpListBase = (UINTN)PrpListHost + PrpListIndex * EFI_PAGE_SIZE;
> > + for (PrpEntryIndex = 0; PrpEntryIndex < ((Remainder != 0) ? Remainder :
> > PrpEntryNo); ++PrpEntryIndex) {
> > + PrpEntry = (UINT8 *)(UINTN) (PrpListBase + PrpEntryIndex *
> > sizeof(UINT64));
> > + CopyMem (PrpEntry, (VOID *)(UINTN) (&PhysicalAddr), sizeof (UINT64));
> > +
> > + PhysicalAddr += EFI_PAGE_SIZE;
> > + }
> > +
> > + return PrpListPhyAddr;
> > +}
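
A worked example of the arithmetic, with hypothetical numbers and 4 KiB pages,
in case it helps when reviewing the loop bounds:

  //
  // A 20 KiB transfer whose mapped buffer starts at physical 0x80000200:
  //   Prp[0]    = 0x80000200                            (first, partially used page)
  //   Pages     = EFI_SIZE_TO_PAGES (0x200 + 0x5000) - 1 = 5
  //   PrpListNo = (5 + 511) / 512 = 1                   (one PRP list page is enough)
  //   Prp[1]    = NVME_PRP_BASE (Private)               (holds the 5 remaining page addresses)
  //
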
> > +
> > +/**
> > + Check the execution status from a given completion queue entry.
> > +
> > + @param[in] Cq A pointer to the NVME_CQ item.
> > +
> > + @retval EFI_SUCCESS No error is reported in the completion queue entry.
> > + @retval EFI_DEVICE_ERROR The completion queue entry reports an error.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmeCheckCqStatus (
> > + IN NVME_CQ *Cq
> > + )
> > +{
> > + if (Cq->Sct == 0x0 && Cq->Sc == 0x0) {
> > + return EFI_SUCCESS;
> > + }
> > +
> > + DEBUG ((DEBUG_INFO, "Dump NVMe Completion Entry Status from
> > [0x%x]:\n", (UINTN)Cq));
> > + DEBUG ((
> > + DEBUG_INFO,
> > + " SQ Identifier : [0x%x], Phase Tag : [%d], Cmd Identifier : [0x%x]\n",
> > + Cq->Sqid,
> > + Cq->Pt,
> > + Cq->Cid
> > + ));
> > + DEBUG ((DEBUG_INFO, " Status Code Type : [0x%x], Status Code :
> > [0x%x]\n", Cq->Sct, Cq->Sc));
> > + DEBUG ((DEBUG_INFO, " NVMe Cmd Execution Result - "));
> > +
> > + switch (Cq->Sct) {
> > + case 0x0:
> > + switch (Cq->Sc) {
> > + case 0x0:
> > + DEBUG ((DEBUG_INFO, "Successful Completion\n"));
> > + return EFI_SUCCESS;
> > + case 0x1:
> > + DEBUG ((DEBUG_INFO, "Invalid Command Opcode\n"));
> > + break;
> > + case 0x2:
> > + DEBUG ((DEBUG_INFO, "Invalid Field in Command\n"));
> > + break;
> > + case 0x3:
> > + DEBUG ((DEBUG_INFO, "Command ID Conflict\n"));
> > + break;
> > + case 0x4:
> > + DEBUG ((DEBUG_INFO, "Data Transfer Error\n"));
> > + break;
> > + case 0x5:
> > + DEBUG ((DEBUG_INFO, "Commands Aborted due to Power Loss
> > Notification\n"));
> > + break;
> > + case 0x6:
> > + DEBUG ((DEBUG_INFO, "Internal Device Error\n"));
> > + break;
> > + case 0x7:
> > + DEBUG ((DEBUG_INFO, "Command Abort Requested\n"));
> > + break;
> > + case 0x8:
> > + DEBUG ((DEBUG_INFO, "Command Aborted due to SQ Deletion\n"));
> > + break;
> > + case 0x9:
> > + DEBUG ((DEBUG_INFO, "Command Aborted due to Failed Fused
> > Command\n"));
> > + break;
> > + case 0xA:
> > + DEBUG ((DEBUG_INFO, "Command Aborted due to Missing Fused
> > Command\n"));
> > + break;
> > + case 0xB:
> > + DEBUG ((DEBUG_INFO, "Invalid Namespace or Format\n"));
> > + break;
> > + case 0xC:
> > + DEBUG ((DEBUG_INFO, "Command Sequence Error\n"));
> > + break;
> > + case 0xD:
> > + DEBUG ((DEBUG_INFO, "Invalid SGL Last Segment Descriptor\n"));
> > + break;
> > + case 0xE:
> > + DEBUG ((DEBUG_INFO, "Invalid Number of SGL Descriptors\n"));
> > + break;
> > + case 0xF:
> > + DEBUG ((DEBUG_INFO, "Data SGL Length Invalid\n"));
> > + break;
> > + case 0x10:
> > + DEBUG ((DEBUG_INFO, "Metadata SGL Length Invalid\n"));
> > + break;
> > + case 0x11:
> > + DEBUG ((DEBUG_INFO, "SGL Descriptor Type Invalid\n"));
> > + break;
> > + case 0x80:
> > + DEBUG ((DEBUG_INFO, "LBA Out of Range\n"));
> > + break;
> > + case 0x81:
> > + DEBUG ((DEBUG_INFO, "Capacity Exceeded\n"));
> > + break;
> > + case 0x82:
> > + DEBUG ((DEBUG_INFO, "Namespace Not Ready\n"));
> > + break;
> > + case 0x83:
> > + DEBUG ((DEBUG_INFO, "Reservation Conflict\n"));
> > + break;
> > + }
> > + break;
> > +
> > + case 0x1:
> > + switch (Cq->Sc) {
> > + case 0x0:
> > + DEBUG ((DEBUG_INFO, "Completion Queue Invalid\n"));
> > + break;
> > + case 0x1:
> > + DEBUG ((DEBUG_INFO, "Invalid Queue Identifier\n"));
> > + break;
> > + case 0x2:
> > + DEBUG ((DEBUG_INFO, "Maximum Queue Size Exceeded\n"));
> > + break;
> > + case 0x3:
> > + DEBUG ((DEBUG_INFO, "Abort Command Limit Exceeded\n"));
> > + break;
> > + case 0x5:
> > + DEBUG ((DEBUG_INFO, "Asynchronous Event Request Limit
> > Exceeded\n"));
> > + break;
> > + case 0x6:
> > + DEBUG ((DEBUG_INFO, "Invalid Firmware Slot\n"));
> > + break;
> > + case 0x7:
> > + DEBUG ((DEBUG_INFO, "Invalid Firmware Image\n"));
> > + break;
> > + case 0x8:
> > + DEBUG ((DEBUG_INFO, "Invalid Interrupt Vector\n"));
> > + break;
> > + case 0x9:
> > + DEBUG ((DEBUG_INFO, "Invalid Log Page\n"));
> > + break;
> > + case 0xA:
> > + DEBUG ((DEBUG_INFO, "Invalid Format\n"));
> > + break;
> > + case 0xB:
> > + DEBUG ((DEBUG_INFO, "Firmware Application Requires Conventional
> > Reset\n"));
> > + break;
> > + case 0xC:
> > + DEBUG ((DEBUG_INFO, "Invalid Queue Deletion\n"));
> > + break;
> > + case 0xD:
> > + DEBUG ((DEBUG_INFO, "Feature Identifier Not Saveable\n"));
> > + break;
> > + case 0xE:
> > + DEBUG ((DEBUG_INFO, "Feature Not Changeable\n"));
> > + break;
> > + case 0xF:
> > + DEBUG ((DEBUG_INFO, "Feature Not Namespace Specific\n"));
> > + break;
> > + case 0x10:
> > + DEBUG ((DEBUG_INFO, "Firmware Application Requires NVM
> > Subsystem Reset\n"));
> > + break;
> > + case 0x80:
> > + DEBUG ((DEBUG_INFO, "Conflicting Attributes\n"));
> > + break;
> > + case 0x81:
> > + DEBUG ((DEBUG_INFO, "Invalid Protection Information\n"));
> > + break;
> > + case 0x82:
> > + DEBUG ((DEBUG_INFO, "Attempted Write to Read Only Range\n"));
> > + break;
> > + }
> > + break;
> > +
> > + case 0x2:
> > + switch (Cq->Sc) {
> > + case 0x80:
> > + DEBUG ((DEBUG_INFO, "Write Fault\n"));
> > + break;
> > + case 0x81:
> > + DEBUG ((DEBUG_INFO, "Unrecovered Read Error\n"));
> > + break;
> > + case 0x82:
> > + DEBUG ((DEBUG_INFO, "End-to-end Guard Check Error\n"));
> > + break;
> > + case 0x83:
> > + DEBUG ((DEBUG_INFO, "End-to-end Application Tag Check Error\n"));
> > + break;
> > + case 0x84:
> > + DEBUG ((DEBUG_INFO, "End-to-end Reference Tag Check Error\n"));
> > + break;
> > + case 0x85:
> > + DEBUG ((DEBUG_INFO, "Compare Failure\n"));
> > + break;
> > + case 0x86:
> > + DEBUG ((DEBUG_INFO, "Access Denied\n"));
> > + break;
> > + }
> > + break;
> > +
> > + default:
> > + DEBUG ((DEBUG_INFO, "Unknown error\n"));
> > + break;
> > + }
> > +
> > + return EFI_DEVICE_ERROR;
> > +}
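
Usage-wise, once the polling loop later in NvmePassThru() observes a completed
entry, the decode above is used roughly like this (sketch):

  Status = NvmeCheckCqStatus (Cq);
  if (EFI_ERROR (Status)) {
    //
    // The SCT/SC fields have already been dumped by NvmeCheckCqStatus();
    // the caller only needs to propagate the failure.
    //
    DEBUG ((DEBUG_ERROR, "%a: NVMe command execution failed.\n", __FUNCTION__));
  }
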
> > +
> > +/**
> > + Sends an NVM Express Command Packet to an NVM Express controller or
> > namespace. This function only
> > + supports blocking execution of the command.
> > +
> > + @param[in] Private The pointer to the PEI_NVME_CONTROLLER_PRIVATE_DATA data structure.
> > + @param[in] NamespaceId A 32-bit namespace ID to which the NVM Express command packet will be sent.
> > + A value of 0 denotes the NVM Express controller, and a value of all 0xFFs
> > + specifies that the command packet should be sent to all valid namespaces.
> > + @param[in,out] Packet A pointer to the EDKII PEI NVM Express PassThru
> > Command Packet to send
> > + to the NVMe namespace specified by NamespaceId.
> > +
> > + @retval EFI_SUCCESS The EDKII PEI NVM Express Command Packet
> > was sent by the host.
> > + TransferLength bytes were transferred to, or from
> > DataBuffer.
> > + @retval EFI_NOT_READY The EDKII PEI NVM Express Command
> > Packet could not be sent because
> > + the controller is not ready. The caller may retry again
> later.
> > + @retval EFI_DEVICE_ERROR A device error occurred while attempting
> > to send the EDKII PEI NVM
> > + Express Command Packet.
> > + @retval EFI_INVALID_PARAMETER Namespace, or the contents of
> > EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET
> > + are invalid.
> > + The EDKII PEI NVM Express Command Packet was not
> sent,
> > so no
> > + additional status information is available.
> > + @retval EFI_UNSUPPORTED The command described by the EDKII PEI
> > NVM Express Command Packet
> > + is not supported by the host adapter.
> > + The EDKII PEI NVM Express Command Packet was not
> sent,
> > so no
> > + additional status information is available.
> > + @retval EFI_TIMEOUT A timeout occurred while waiting for the
> > EDKII PEI NVM Express Command
> > + Packet to execute.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmePassThru (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private,
> > + IN UINT32 NamespaceId,
> > + IN OUT EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET
> > *Packet
> > + )
> > +{
> > + EFI_STATUS Status;
> > + NVME_SQ *Sq;
> > + NVME_CQ *Cq;
> > + UINT8 QueueId;
> > + UINTN SqSize;
> > + UINTN CqSize;
> > + EDKII_IOMMU_OPERATION MapOp;
> > + UINTN MapLength;
> > + EFI_PHYSICAL_ADDRESS PhyAddr;
> > + VOID *MapData;
> > + VOID *MapMeta;
> > + UINT32 Bytes;
> > + UINT32 Offset;
> > + UINT32 Data32;
> > + UINT64 Timer;
> > +
> > + //
> > + // Check the data fields in Packet parameter
> > + //
> > + if (Packet == NULL) {
> > + DEBUG ((
> > + DEBUG_ERROR,
> > + "%a, Invalid parameter: Packet(%lx)\n",
> > + __FUNCTION__,
> > + (UINTN)Packet
> > + ));
> > + return EFI_INVALID_PARAMETER;
> > + }
> > +
> > + if ((Packet->NvmeCmd == NULL) || (Packet->NvmeCompletion == NULL))
> > {
> > + DEBUG ((
> > + DEBUG_ERROR,
> > + "%a, Invalid parameter: NvmeCmd (%lx)/NvmeCompletion(%lx)\n",
> > + __FUNCTION__,
> > + (UINTN)Packet->NvmeCmd,
> > + (UINTN)Packet->NvmeCompletion
> > + ));
> > + return EFI_INVALID_PARAMETER;
> > + }
> > +
> > + if (Packet->QueueType != NVME_ADMIN_QUEUE && Packet-
> > >QueueType != NVME_IO_QUEUE) {
> > + DEBUG ((
> > + DEBUG_ERROR,
> > + "%a, Invalid parameter: QueueId(%lx)\n",
> > + __FUNCTION__,
> > + (UINTN)Packet->QueueType
> > + ));
> > + return EFI_INVALID_PARAMETER;
> > + }
> > +
> > + QueueId = Packet->QueueType;
> > + Sq = Private->SqBuffer[QueueId] + Private->SqTdbl[QueueId].Sqt;
> > + Cq = Private->CqBuffer[QueueId] + Private->CqHdbl[QueueId].Cqh;
> > + if (QueueId == NVME_ADMIN_QUEUE) {
> > + SqSize = NVME_ASQ_SIZE + 1;
> > + CqSize = NVME_ACQ_SIZE + 1;
> > + } else {
> > + SqSize = NVME_CSQ_SIZE + 1;
> > + CqSize = NVME_CCQ_SIZE + 1;
> > + }
> > +
> > + if (Packet->NvmeCmd->Nsid != NamespaceId) {
> > + DEBUG ((
> > + DEBUG_ERROR,
> > + "%a: Nsid mismatch (%x, %x)\n",
> > + __FUNCTION__,
> > + Packet->NvmeCmd->Nsid,
> > + NamespaceId
> > + ));
> > + return EFI_INVALID_PARAMETER;
> > + }
> > +
> > + ZeroMem (Sq, sizeof (NVME_SQ));
> > + Sq->Opc = Packet->NvmeCmd->Cdw0.Opcode;
> > + Sq->Fuse = Packet->NvmeCmd->Cdw0.FusedOperation;
> > + Sq->Cid = Packet->NvmeCmd->Cdw0.Cid;
> > + Sq->Nsid = Packet->NvmeCmd->Nsid;
> > +
> > + //
> > + // Currently we only support PRP for data transfer, SGL is NOT supported
> > + //
> > + ASSERT (Sq->Psdt == 0);
> > + if (Sq->Psdt != 0) {
> > + DEBUG ((DEBUG_ERROR, "%a: Does not support SGL mechanism.\n", __FUNCTION__));
> > + return EFI_UNSUPPORTED;
> > + }
> > +
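> > + //
> > + // PRP Entry 1 initially holds the caller-supplied buffer address; when a
> > + // data transfer is required, it is replaced below with the device address
> > + // returned by IoMmuMap().
> > + //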
> > + Sq->Prp[0] = (UINT64)(UINTN)Packet->TransferBuffer;
> > + Sq->Prp[1] = 0;
> > + MapData = NULL;
> > + MapMeta = NULL;
> > + Status = EFI_SUCCESS;
> > + //
> > + // If the NVMe command transfers data in or out, map the caller's buffer to
> > + // controller-accessible (device) addresses.
> > + //
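> > + //
> > + // Note: per the NVM Express specification, opcode bits [1:0] encode the
> > + // data transfer direction (bit 0 set: host-to-controller, bit 1 set:
> > + // controller-to-host); they are also used below to select the IOMMU
> > + // mapping operation.
> > + //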
> > + if ((Sq->Opc & (BIT0 | BIT1)) != 0) {
> > + if ((Packet->TransferLength == 0) || (Packet->TransferBuffer == NULL)) {
> > + return EFI_INVALID_PARAMETER;
> > + }
> > +
> > + //
> > + // Currently, we only support creating IO submission/completion queues that
> > + // are allocated internally by the driver.
> > + //
> > + if ((Packet->QueueType == NVME_ADMIN_QUEUE) &&
> > +     ((Sq->Opc == NVME_ADMIN_CRIOCQ_CMD) || (Sq->Opc == NVME_ADMIN_CRIOSQ_CMD))) {
> > +   if ((Packet->TransferBuffer != Private->SqBuffer[NVME_IO_QUEUE]) &&
> > +       (Packet->TransferBuffer != Private->CqBuffer[NVME_IO_QUEUE])) {
> > + DEBUG ((
> > + DEBUG_ERROR,
> > + "%a: Does not support external IO queues creation request.\n",
> > + __FUNCTION__
> > + ));
> > + return EFI_UNSUPPORTED;
> > + }
> > + } else {
> > + if ((Sq->Opc & BIT0) != 0) {
> > + MapOp = EdkiiIoMmuOperationBusMasterRead;
> > + } else {
> > + MapOp = EdkiiIoMmuOperationBusMasterWrite;
> > + }
> > +
> > + MapLength = Packet->TransferLength;
> > + Status = IoMmuMap (
> > + MapOp,
> > + Packet->TransferBuffer,
> > + &MapLength,
> > + &PhyAddr,
> > + &MapData
> > + );
> > + if (EFI_ERROR (Status) || (MapLength != Packet->TransferLength)) {
> > + Status = EFI_OUT_OF_RESOURCES;
> > + DEBUG ((DEBUG_ERROR, "%a: Fail to map data buffer.\n", __FUNCTION__));
> > + goto Exit;
> > + }
> > +
> > + Sq->Prp[0] = PhyAddr;
> > +
> > + if ((Packet->MetadataLength != 0) && (Packet->MetadataBuffer != NULL)) {
> > + MapLength = Packet->MetadataLength;
> > + Status = IoMmuMap (
> > + MapOp,
> > + Packet->MetadataBuffer,
> > + &MapLength,
> > + &PhyAddr,
> > + &MapMeta
> > + );
> > + if (EFI_ERROR (Status) || (MapLength != Packet->MetadataLength)) {
> > + Status = EFI_OUT_OF_RESOURCES;
> > + DEBUG ((DEBUG_ERROR, "%a: Fail to map meta data buffer.\n", __FUNCTION__));
> > + goto Exit;
> > + }
> > + Sq->Mptr = PhyAddr;
> > + }
> > + }
> > + }
> > +
> > + //
> > + // If the buffer size spans more than two memory pages (page size as defined
> > + // in CC.Mps), then build a PRP list in the second PRP submission queue entry.
> > + //
> > + Offset = ((UINT32)Sq->Prp[0]) & (EFI_PAGE_SIZE - 1);
> > + Bytes = Packet->TransferLength;
> > +
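> > + //
> > + // For example, a 16 KiB transfer that starts 512 bytes into a page gives
> > + // Offset + Bytes = 16896, which spans 5 pages; the first page is described
> > + // by PRP Entry 1 and the remaining 4 pages go into the PRP list referenced
> > + // by PRP Entry 2.
> > + //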
> > + if ((Offset + Bytes) > (EFI_PAGE_SIZE * 2)) {
> > + //
> > + // Create PrpList for remaining Data Buffer.
> > + //
> > + PhyAddr = (Sq->Prp[0] + EFI_PAGE_SIZE) & ~(EFI_PAGE_SIZE - 1);
> > + Sq->Prp[1] = NvmeCreatePrpList (
> > + Private,
> > + PhyAddr,
> > + EFI_SIZE_TO_PAGES(Offset + Bytes) - 1
> > + );
> > + if (Sq->Prp[1] == 0) {
> > + Status = EFI_OUT_OF_RESOURCES;
> > + DEBUG ((DEBUG_ERROR, "%a: Create PRP list fail, Status - %r\n", __FUNCTION__, Status));
> > + goto Exit;
> > + }
> > +
> > + } else if ((Offset + Bytes) > EFI_PAGE_SIZE) {
> > + Sq->Prp[1] = (Sq->Prp[0] + EFI_PAGE_SIZE) & ~(EFI_PAGE_SIZE - 1);
> > + }
> > +
> > + if (Packet->NvmeCmd->Flags & CDW10_VALID) {
> > + Sq->Payload.Raw.Cdw10 = Packet->NvmeCmd->Cdw10;
> > + }
> > + if (Packet->NvmeCmd->Flags & CDW11_VALID) {
> > + Sq->Payload.Raw.Cdw11 = Packet->NvmeCmd->Cdw11;
> > + }
> > + if (Packet->NvmeCmd->Flags & CDW12_VALID) {
> > + Sq->Payload.Raw.Cdw12 = Packet->NvmeCmd->Cdw12;
> > + }
> > + if (Packet->NvmeCmd->Flags & CDW13_VALID) {
> > + Sq->Payload.Raw.Cdw13 = Packet->NvmeCmd->Cdw13;
> > + }
> > + if (Packet->NvmeCmd->Flags & CDW14_VALID) {
> > + Sq->Payload.Raw.Cdw14 = Packet->NvmeCmd->Cdw14;
> > + }
> > + if (Packet->NvmeCmd->Flags & CDW15_VALID) {
> > + Sq->Payload.Raw.Cdw15 = Packet->NvmeCmd->Cdw15;
> > + }
> > +
> > + //
> > + // Ring the submission queue doorbell.
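> > + // The tail index wraps modulo the queue size before being written to the
> > + // doorbell register, which makes the new entry visible to the controller.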
> > + //
> > + Private->SqTdbl[QueueId].Sqt++;
> > + if (Private->SqTdbl[QueueId].Sqt == SqSize) {
> > + Private->SqTdbl[QueueId].Sqt = 0;
> > + }
> > + Data32 = ReadUnaligned32 ((UINT32 *)&Private->SqTdbl[QueueId]);
> > + Status = NVME_SET_SQTDBL (Private, QueueId, &Data32);
> > + if (EFI_ERROR (Status)) {
> > + DEBUG ((DEBUG_ERROR, "%a: NVME_SET_SQTDBL fail, Status - %r\n", __FUNCTION__, Status));
> > + goto Exit;
> > + }
> > +
> > + //
> > + // Wait for completion queue to get filled in.
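> > + // A new completion entry is detected when its phase tag differs from the
> > + // expected phase tag tracked in Private->Pt[QueueId].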
> > + //
> > + Status = EFI_TIMEOUT;
> > + Timer = 0;
> > + while (Timer < Packet->CommandTimeout) {
> > + if (Cq->Pt != Private->Pt[QueueId]) {
> > + Status = EFI_SUCCESS;
> > + break;
> > + }
> > +
> > + MicroSecondDelay (NVME_POLL_INTERVAL);
> > + Timer += NVME_POLL_INTERVAL;
> > + }
> > +
> > + if (Status == EFI_TIMEOUT) {
> > + //
> > + // A timeout occurred for the NVMe command; reset the controller to abort
> > + // the outstanding command.
> > + //
> > + DEBUG ((DEBUG_ERROR, "%a: Timeout occurred for the PassThru command.\n", __FUNCTION__));
> > + Status = NvmeControllerInit (Private);
> > + if (EFI_ERROR (Status)) {
> > + Status = EFI_DEVICE_ERROR;
> > + } else {
> > + //
> > + // Return EFI_TIMEOUT to indicate that a timeout occurred for the PassThru
> > + // command.
> > + //
> > + Status = EFI_TIMEOUT;
> > + }
> > + goto Exit;
> > + }
> > +
> > + //
> > + // Move forward the Completion Queue head
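> > + // (the expected phase tag is inverted when the head wraps around)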
> > + //
> > + Private->CqHdbl[QueueId].Cqh++;
> > + if (Private->CqHdbl[QueueId].Cqh == CqSize) {
> > + Private->CqHdbl[QueueId].Cqh = 0;
> > + Private->Pt[QueueId] ^= 1;
> > + }
> > +
> > + //
> > + // Copy the Response Queue entry for this command to the caller's response
> > + // buffer.
> > + //
> > + CopyMem (Packet->NvmeCompletion, Cq, sizeof (EDKII_PEI_NVM_EXPRESS_COMPLETION));
> > +
> > + //
> > + // Check the NVMe cmd execution result
> > + //
> > + Status = NvmeCheckCqStatus (Cq);
> > + NVME_SET_CQHDBL (Private, QueueId, &Private->CqHdbl[QueueId]);
> > +
> > +Exit:
> > + if (MapMeta != NULL) {
> > + IoMmuUnmap (MapMeta);
> > + }
> > +
> > + if (MapData != NULL) {
> > + IoMmuUnmap (MapData);
> > + }
> > +
> > + return Status;
> > +}
> > diff --git a/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiPassThru.h b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiPassThru.h
> > new file mode 100644
> > index 0000000000..96c748e1bf
> > --- /dev/null
> > +++ b/MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPeiPassThru.h
> > @@ -0,0 +1,107 @@
> > +/** @file
> > + The NvmExpressPei driver is used to manage the non-volatile memory subsystem
> > + which follows the NVM Express specification at the PEI phase.
> > +
> > + Copyright (c) 2018, Intel Corporation. All rights reserved.<BR>
> > +
> > + This program and the accompanying materials
> > + are licensed and made available under the terms and conditions
> > + of the BSD License which accompanies this distribution. The
> > + full text of the license may be found at
> > + http://opensource.org/licenses/bsd-license.php
> > +
> > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> > +
> > +**/
> > +
> > +#ifndef _NVM_EXPRESS_PEI_PASSTHRU_H_
> > +#define _NVM_EXPRESS_PEI_PASSTHRU_H_
> > +
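> > +//
> > +// Namespace ID 0 addresses the NVM Express controller itself rather than a
> > +// specific namespace.
> > +//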
> > +#define NVME_CONTROLLER_NSID 0
> > +
> > +typedef struct {
> > + UINT8 Opcode;
> > + UINT8 FusedOperation;
> > + #define NORMAL_CMD 0x00
> > + #define FUSED_FIRST_CMD 0x01
> > + #define FUSED_SECOND_CMD 0x02
> > + UINT16 Cid;
> > +} NVME_CDW0;
> > +
> > +typedef struct {
> > + NVME_CDW0 Cdw0;
> > + UINT8 Flags;
> > + #define CDW10_VALID 0x01
> > + #define CDW11_VALID 0x02
> > + #define CDW12_VALID 0x04
> > + #define CDW13_VALID 0x08
> > + #define CDW14_VALID 0x10
> > + #define CDW15_VALID 0x20
> > + UINT32 Nsid;
> > + UINT32 Cdw10;
> > + UINT32 Cdw11;
> > + UINT32 Cdw12;
> > + UINT32 Cdw13;
> > + UINT32 Cdw14;
> > + UINT32 Cdw15;
> > +} EDKII_PEI_NVM_EXPRESS_COMMAND;
> > +
> > +typedef struct {
> > + UINT32 Cdw0;
> > + UINT32 Cdw1;
> > + UINT32 Cdw2;
> > + UINT32 Cdw3;
> > +} EDKII_PEI_NVM_EXPRESS_COMPLETION;
> > +
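> > +//
> > +// Command packet consumed by NvmePassThru(). QueueType selects either the
> > +// admin queue (NVME_ADMIN_QUEUE) or the I/O queue (NVME_IO_QUEUE);
> > +// TransferBuffer/MetadataBuffer are mapped for DMA by NvmePassThru() when the
> > +// command transfers data.
> > +//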
> > +typedef struct {
> > + UINT64 CommandTimeout;
> > + VOID *TransferBuffer;
> > + UINT32 TransferLength;
> > + VOID *MetadataBuffer;
> > + UINT32 MetadataLength;
> > + UINT8 QueueType;
> > + EDKII_PEI_NVM_EXPRESS_COMMAND *NvmeCmd;
> > + EDKII_PEI_NVM_EXPRESS_COMPLETION *NvmeCompletion;
> > +} EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET;
> > +
> > +
> > +/**
> > + Sends an NVM Express Command Packet to an NVM Express controller or
> > + namespace. This function only supports blocking execution of the command.
> > +
> > + @param[in]     Private        The pointer to the PEI_NVME_CONTROLLER_PRIVATE_DATA
> > +                               data structure.
> > + @param[in]     NamespaceId    The 32 bit Namespace ID to which the NVM Express
> > +                               command packet will be sent. A value of 0 denotes
> > +                               the NVM Express controller; a value of all FFh in
> > +                               the Namespace ID specifies that the command packet
> > +                               should be sent to all valid namespaces.
> > + @param[in,out] Packet         A pointer to the EDKII PEI NVM Express PassThru
> > +                               Command Packet to send to the NVMe namespace
> > +                               specified by NamespaceId.
> > +
> > + @retval EFI_SUCCESS           The EDKII PEI NVM Express Command Packet was sent
> > +                               by the host. TransferLength bytes were transferred
> > +                               to, or from, DataBuffer.
> > + @retval EFI_NOT_READY         The EDKII PEI NVM Express Command Packet could not
> > +                               be sent because the controller is not ready. The
> > +                               caller may retry later.
> > + @retval EFI_DEVICE_ERROR      A device error occurred while attempting to send
> > +                               the EDKII PEI NVM Express Command Packet.
> > + @retval EFI_INVALID_PARAMETER NamespaceId, or the contents of
> > +                               EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET,
> > +                               are invalid. The EDKII PEI NVM Express Command
> > +                               Packet was not sent, so no additional status
> > +                               information is available.
> > + @retval EFI_UNSUPPORTED       The command described by the EDKII PEI NVM Express
> > +                               Command Packet is not supported by the host
> > +                               adapter. The EDKII PEI NVM Express Command Packet
> > +                               was not sent, so no additional status information
> > +                               is available.
> > + @retval EFI_TIMEOUT           A timeout occurred while waiting for the EDKII PEI
> > +                               NVM Express Command Packet to execute.
> > +
> > +**/
> > +EFI_STATUS
> > +NvmePassThru (
> > + IN PEI_NVME_CONTROLLER_PRIVATE_DATA *Private,
> > + IN UINT32 NamespaceId,
> > + IN OUT EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET    *Packet
> > + );
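> > +
> > +//
> > +// Illustrative usage sketch (not part of the driver flow), assuming an
> > +// initialized Private controller context, a Buffer large enough for the
> > +// Identify Controller data, and the NVME_ADMIN_IDENTIFY_CMD /
> > +// NVME_ADMIN_CONTROLLER_DATA definitions from MdePkg's Nvme.h; the timeout
> > +// value shown is only a placeholder:
> > +//
> > +//   EDKII_PEI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET  Packet;
> > +//   EDKII_PEI_NVM_EXPRESS_COMMAND                   Command;
> > +//   EDKII_PEI_NVM_EXPRESS_COMPLETION                Completion;
> > +//
> > +//   ZeroMem (&Packet, sizeof (Packet));
> > +//   ZeroMem (&Command, sizeof (Command));
> > +//   ZeroMem (&Completion, sizeof (Completion));
> > +//   Command.Cdw0.Opcode    = NVME_ADMIN_IDENTIFY_CMD;
> > +//   Command.Nsid           = NVME_CONTROLLER_NSID;
> > +//   Command.Cdw10          = 1;                     // CNS 1: Identify Controller
> > +//   Command.Flags          = CDW10_VALID;
> > +//   Packet.NvmeCmd         = &Command;
> > +//   Packet.NvmeCompletion  = &Completion;
> > +//   Packet.TransferBuffer  = Buffer;
> > +//   Packet.TransferLength  = sizeof (NVME_ADMIN_CONTROLLER_DATA);
> > +//   Packet.CommandTimeout  = 5000000;               // placeholder timeout
> > +//   Packet.QueueType       = NVME_ADMIN_QUEUE;
> > +//   Status = NvmePassThru (Private, NVME_CONTROLLER_NSID, &Packet);
> > +//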
> > +
> > +#endif
> > diff --git a/MdeModulePkg/MdeModulePkg.dsc b/MdeModulePkg/MdeModulePkg.dsc
> > index 18928f96d8..09b0f9f13d 100644
> > --- a/MdeModulePkg/MdeModulePkg.dsc
> > +++ b/MdeModulePkg/MdeModulePkg.dsc
> > @@ -233,6 +233,7 @@
> > MdeModulePkg/Bus/Pci/PciBusDxe/PciBusDxe.inf
> >  MdeModulePkg/Bus/Pci/IncompatiblePciDeviceSupportDxe/IncompatiblePciDeviceSupportDxe.inf
> > MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressDxe.inf
> > + MdeModulePkg/Bus/Pci/NvmExpressPei/NvmExpressPei.inf
> > MdeModulePkg/Bus/Pci/SdMmcPciHcDxe/SdMmcPciHcDxe.inf
> > MdeModulePkg/Bus/Pci/SdMmcPciHcPei/SdMmcPciHcPei.inf
> > MdeModulePkg/Bus/Sd/EmmcBlockIoPei/EmmcBlockIoPei.inf
> > --
> > 2.12.0.windows.1
Thread overview: 14+ messages
2018-06-15 7:03 [PATCH 0/4] Add PEI BlockIo support for NVM Express devices Hao Wu
2018-06-15 7:03 ` [PATCH 1/4] MdeModulePkg: Add definitions for EDKII PEI NVME host controller PPI Hao Wu
2018-06-21 7:35 ` Ni, Ruiyu
2018-06-21 8:31 ` Wu, Hao A
2018-06-22 1:42 ` Zeng, Star
2018-06-15 7:03 ` [PATCH 2/4] MdeModulePkg/NvmExpressPei: Add the NVME device PEI BlockIo support Hao Wu
2018-06-21 7:45 ` Ni, Ruiyu
2018-06-21 8:31 ` Wu, Hao A [this message]
2018-06-22 1:43 ` Zeng, Star
2018-06-15 7:03 ` [PATCH 3/4] MdeModulePkg: Add GUID for recovery capsule on NVM Express devices Hao Wu
2018-06-21 7:54 ` Ni, Ruiyu
2018-06-22 1:42 ` Zeng, Star
2018-06-15 7:03 ` [PATCH 4/4] FatPkg/FatPei: Add the recognition of recovery capsule on NVME device Hao Wu
2018-06-21 7:52 ` Ni, Ruiyu