* [PATCH v2 0/3] NvmExpressDxe: Bug fixes within NvmExpressPassThru()

From: Hao Wu @ 2018-10-24  1:19 UTC
To: edk2-devel; +Cc: Hao Wu, Liangcheng Tang, Jiewen Yao, Ruiyu Ni, Star Zeng

V2 changes:
A. For patch 3/3, introduce a BOOLEAN flag in the controller private data
   structure. The flag will indicate internal IO queue creation and will
   simplify the check logic in the NvmExpressPassThru() function.

V1 history:
The series addresses a couple of bugs within the NvmExpressPassThru()
function. Please refer to the log message of each commit for more details.

Cc: Liangcheng Tang <liangcheng.tang@intel.com>
Cc: Jiewen Yao <Jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Star Zeng <star.zeng@intel.com>

Hao Wu (3):
  MdeModulePkg/NvmExpressDxe: Refine data buffer & len check in PassThru
  MdeModulePkg/NvmExpressDxe: Always copy CQ entry to PassThru packet
  MdeModulePkg/NvmExpressDxe: Refine PassThru IO queue creation behavior

 MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpress.h         |  7 +-
 MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressHci.c      |  6 ++
 MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c | 67 ++++++++++++--------
 3 files changed, 51 insertions(+), 29 deletions(-)

-- 
2.12.0.windows.1
* [PATCH v2 1/3] MdeModulePkg/NvmExpressDxe: Refine data buffer & len check in PassThru

From: Hao Wu @ 2018-10-24  1:19 UTC
To: edk2-devel; +Cc: Hao Wu, Liangcheng Tang, Star Zeng

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1142

According to the NVM Express spec Revision 1.1, for some commands (like the
Get/Set Features commands, Figures 89 & 90 of the spec), the memory buffer
may be optional even though the command opcode indicates a data transfer
between host & controller (Get/Set Features commands, Figure 38 of the spec).

Hence, this commit refines the checks for the 'TransferLength' and
'TransferBuffer' fields of the EFI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET
structure to address this issue.

Cc: Liangcheng Tang <liangcheng.tang@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
---
 MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c | 33 +++++++++++---------
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c b/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c
index 2468871322..bfcd349794 100644
--- a/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c
+++ b/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c
@@ -595,7 +595,8 @@ NvmExpressPassThru (
   //
   if (((Sq->Opc & (BIT0 | BIT1)) != 0) &&
       !((Packet->QueueType == NVME_ADMIN_QUEUE) && ((Sq->Opc == NVME_ADMIN_CRIOCQ_CMD) || (Sq->Opc == NVME_ADMIN_CRIOSQ_CMD)))) {
-    if ((Packet->TransferLength == 0) || (Packet->TransferBuffer == NULL)) {
+    if (((Packet->TransferLength != 0) && (Packet->TransferBuffer == NULL)) ||
+        ((Packet->TransferLength == 0) && (Packet->TransferBuffer != NULL))) {
       return EFI_INVALID_PARAMETER;
     }
 
@@ -605,21 +606,23 @@ NvmExpressPassThru (
       Flag = EfiPciIoOperationBusMasterWrite;
     }
 
-    MapLength = Packet->TransferLength;
-    Status = PciIo->Map (
-                      PciIo,
-                      Flag,
-                      Packet->TransferBuffer,
-                      &MapLength,
-                      &PhyAddr,
-                      &MapData
-                      );
-    if (EFI_ERROR (Status) || (Packet->TransferLength != MapLength)) {
-      return EFI_OUT_OF_RESOURCES;
-    }
+    if ((Packet->TransferLength != 0) && (Packet->TransferBuffer != NULL)) {
+      MapLength = Packet->TransferLength;
+      Status = PciIo->Map (
+                        PciIo,
+                        Flag,
+                        Packet->TransferBuffer,
+                        &MapLength,
+                        &PhyAddr,
+                        &MapData
+                        );
+      if (EFI_ERROR (Status) || (Packet->TransferLength != MapLength)) {
+        return EFI_OUT_OF_RESOURCES;
+      }
 
-    Sq->Prp[0] = PhyAddr;
-    Sq->Prp[1] = 0;
+      Sq->Prp[0] = PhyAddr;
+      Sq->Prp[1] = 0;
+    }
 
     if((Packet->MetadataLength != 0) && (Packet->MetadataBuffer != NULL)) {
       MapLength = Packet->MetadataLength;
-- 
2.12.0.windows.1
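To make the relaxed check concrete, below is a minimal caller-side sketch that
issues a Get Features (Number of Queues, FID 07h) admin command through the
Pass Thru protocol with no data buffer at all, which the refined check now
permits even though the opcode's data-transfer bits are set. The structure,
macro and header names follow MdePkg's Protocol/NvmExpressPassthru.h and
IndustryStandard/Nvme.h as recalled here; treat the exact names, the function
name and the timeout value as illustrative assumptions, not a definitive
recipe.

#include <Uefi.h>
#include <Library/BaseMemoryLib.h>
#include <Library/DebugLib.h>
#include <Protocol/NvmExpressPassthru.h>
#include <IndustryStandard/Nvme.h>

EFI_STATUS
ExampleGetNumberOfQueues (
  IN EFI_NVM_EXPRESS_PASS_THRU_PROTOCOL  *Passthru
  )
{
  EFI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET  Packet;
  EFI_NVM_EXPRESS_COMMAND                   Command;
  EFI_NVM_EXPRESS_COMPLETION                Completion;
  EFI_STATUS                                Status;

  ZeroMem (&Packet, sizeof (Packet));
  ZeroMem (&Command, sizeof (Command));
  ZeroMem (&Completion, sizeof (Completion));

  Command.Cdw0.Opcode   = NVME_ADMIN_GET_FEATURES_CMD;  // Opcode 0Ah, data-transfer bits set
  Command.Cdw10         = 0x07;                         // FID 07h: Number of Queues
  Command.Flags         = CDW10_VALID;

  Packet.NvmeCmd        = &Command;
  Packet.NvmeCompletion = &Completion;
  Packet.QueueType      = NVME_ADMIN_QUEUE;
  Packet.CommandTimeout = 5 * 10000000ULL;              // 5 s, expressed in 100 ns units
  //
  // No data buffer is supplied: TransferBuffer stays NULL and TransferLength
  // stays 0, which the refined check accepts for this kind of command.
  //
  Status = Passthru->PassThru (Passthru, 0, &Packet, NULL);
  if (!EFI_ERROR (Status)) {
    DEBUG ((DEBUG_INFO, "Number of Queues feature (CQ Dword 0): 0x%x\n", Completion.DW0));
  }

  return Status;
}

With both fields zero, NvmExpressPassThru() simply skips the PciIo->Map()
step and leaves the PRP entries cleared, instead of failing the request with
EFI_INVALID_PARAMETER as it did before this patch.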
* [PATCH v2 2/3] MdeModulePkg/NvmExpressDxe: Always copy CQ entry to PassThru packet

From: Hao Wu @ 2018-10-24  1:19 UTC
To: edk2-devel; +Cc: Hao Wu, Liangcheng Tang, Star Zeng

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1259

According to the NVM Express spec Revision 1.1, for some commands,
command-related information is stored in Dword 0 of the completion queue
entry. One case is the Get Features command (Section 5.9.2 of the spec),
where Dword 0 of the completion queue entry may contain feature information.

Hence, this commit always copies the content of the completion queue entry
to the PassThru packet, regardless of the execution result of the command.

Cc: Liangcheng Tang <liangcheng.tang@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>
---
 MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c b/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c
index bfcd349794..c52e960771 100644
--- a/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c
+++ b/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c
@@ -781,17 +781,16 @@ NvmExpressPassThru (
     } else {
       Status = EFI_DEVICE_ERROR;
       //
-      // Copy the Respose Queue entry for this command to the callers response buffer
-      //
-      CopyMem(Packet->NvmeCompletion, Cq, sizeof(EFI_NVM_EXPRESS_COMPLETION));
-
-      //
       // Dump every completion entry status for debugging.
       //
       DEBUG_CODE_BEGIN();
       NvmeDumpStatus(Cq);
       DEBUG_CODE_END();
     }
+    //
+    // Copy the Respose Queue entry for this command to the callers response buffer
+    //
+    CopyMem(Packet->NvmeCompletion, Cq, sizeof(EFI_NVM_EXPRESS_COMPLETION));
   } else {
     //
     // Timeout occurs for an NVMe command. Reset the controller to abort the
-- 
2.12.0.windows.1
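With this change a caller can rely on Packet->NvmeCompletion being filled in
even when PassThru() returns EFI_DEVICE_ERROR. The hypothetical helper below
(name invented for illustration; it assumes the same includes as the sketch
after patch 1/3) shows what that enables: decoding the NVMe status from
Dword 3 on failure and reading the command-specific result from Dword 0 on
success, following the NVMe 1.1 completion queue entry layout.

VOID
ExampleDumpNvmeResult (
  IN EFI_STATUS                  Status,
  IN EFI_NVM_EXPRESS_COMPLETION  *Completion
  )
{
  UINT8  Sct;
  UINT8  Sc;

  if (Status == EFI_DEVICE_ERROR) {
    //
    // The completion entry is now valid even on failure, so the NVMe status
    // can be decoded from Dword 3 (Status Code in bits 24:17, Status Code
    // Type in bits 27:25 per NVMe 1.1).
    //
    Sc  = (UINT8)((Completion->DW3 >> 17) & 0xFF);
    Sct = (UINT8)((Completion->DW3 >> 25) & 0x7);
    DEBUG ((DEBUG_ERROR, "NVMe command failed: SCT=0x%x SC=0x%x\n", Sct, Sc));
  } else if (!EFI_ERROR (Status)) {
    //
    // Command-specific results (e.g. Get Features data) are returned in Dword 0.
    //
    DEBUG ((DEBUG_INFO, "NVMe command completed, CQ Dword 0 = 0x%x\n", Completion->DW0));
  }
}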
* [PATCH v2 3/3] MdeModulePkg/NvmExpressDxe: Refine PassThru IO queue creation behavior

From: Hao Wu @ 2018-10-24  1:19 UTC
To: edk2-devel; +Cc: Hao Wu, Jiewen Yao, Ruiyu Ni, Star Zeng

REF: https://bugzilla.tianocore.org/show_bug.cgi?id=1260

For the PassThru() service of the NVM Express Pass Through Protocol, the
current implementation (function NvmExpressPassThru()) will only use the IO
Completion/Submission queues created internally by this driver during the
controller initialization process. Any other IO queues created will not be
consumed. So there is little value in accepting external IO
Completion/Submission queue creation requests.

This commit refines the behavior of NvmExpressPassThru(): it will only accept
driver-internal IO queue creation commands and will return EFI_UNSUPPORTED
for external ones.

Cc: Jiewen Yao <Jiewen.yao@intel.com>
Cc: Ruiyu Ni <ruiyu.ni@intel.com>
Cc: Star Zeng <star.zeng@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Hao Wu <hao.a.wu@intel.com>
---
 MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpress.h         |  7 +++++-
 MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressHci.c      |  6 +++++
 MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c | 25 +++++++++++++-------
 3 files changed, 29 insertions(+), 9 deletions(-)

diff --git a/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpress.h b/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpress.h
index ad0d9b8966..fe7d37c118 100644
--- a/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpress.h
+++ b/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpress.h
@@ -3,7 +3,7 @@
   NVM Express specification.
 
   (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
-  Copyright (c) 2013 - 2017, Intel Corporation. All rights reserved.<BR>
+  Copyright (c) 2013 - 2018, Intel Corporation. All rights reserved.<BR>
   This program and the accompanying materials
   are licensed and made available under the terms and conditions of the BSD License
   which accompanies this distribution.  The full text of the license may be found at
@@ -147,6 +147,11 @@ struct _NVME_CONTROLLER_PRIVATE_DATA {
   NVME_CQHDBL                         CqHdbl[NVME_MAX_QUEUES];
   UINT16                              AsyncSqHead;
 
+  //
+  // Flag to indicate internal IO queue creation.
+  //
+  BOOLEAN                             CreateIoQueue;
+
   UINT8                               Pt[NVME_MAX_QUEUES];
   UINT16                              Cid[NVME_MAX_QUEUES];
 
diff --git a/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressHci.c b/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressHci.c
index 421561f16d..4a070f3f13 100644
--- a/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressHci.c
+++ b/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressHci.c
@@ -584,6 +584,7 @@ NvmeCreateIoCompletionQueue (
   UINT16                                   QueueSize;
 
   Status = EFI_SUCCESS;
+  Private->CreateIoQueue = TRUE;
 
   for (Index = 1; Index < NVME_MAX_QUEUES; Index++) {
     ZeroMem (&CommandPacket, sizeof(EFI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET));
@@ -627,6 +628,8 @@ NvmeCreateIoCompletionQueue (
     }
   }
 
+  Private->CreateIoQueue = FALSE;
+
   return Status;
 }
 
@@ -653,6 +656,7 @@ NvmeCreateIoSubmissionQueue (
   UINT16                                   QueueSize;
 
   Status = EFI_SUCCESS;
+  Private->CreateIoQueue = TRUE;
 
   for (Index = 1; Index < NVME_MAX_QUEUES; Index++) {
     ZeroMem (&CommandPacket, sizeof(EFI_NVM_EXPRESS_PASS_THRU_COMMAND_PACKET));
@@ -698,6 +702,8 @@ NvmeCreateIoSubmissionQueue (
     }
   }
 
+  Private->CreateIoQueue = FALSE;
+
   return Status;
 }
 
diff --git a/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c b/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c
index c52e960771..78464ff422 100644
--- a/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c
+++ b/MdeModulePkg/Bus/Pci/NvmExpressDxe/NvmExpressPassthru.c
@@ -587,14 +587,23 @@ NvmExpressPassThru (
   }
 
   Sq->Prp[0] = (UINT64)(UINTN)Packet->TransferBuffer;
-  //
-  // If the NVMe cmd has data in or out, then mapping the user buffer to the PCI controller specific addresses.
-  // Note here we don't handle data buffer for CreateIOSubmitionQueue and CreateIOCompletionQueue cmds because
-  // these two cmds are special which requires their data buffer must support simultaneous access by both the
-  // processor and a PCI Bus Master. It's caller's responsbility to ensure this.
-  //
-  if (((Sq->Opc & (BIT0 | BIT1)) != 0) &&
-      !((Packet->QueueType == NVME_ADMIN_QUEUE) && ((Sq->Opc == NVME_ADMIN_CRIOCQ_CMD) || (Sq->Opc == NVME_ADMIN_CRIOSQ_CMD)))) {
+  if ((Packet->QueueType == NVME_ADMIN_QUEUE) &&
+      ((Sq->Opc == NVME_ADMIN_CRIOCQ_CMD) || (Sq->Opc == NVME_ADMIN_CRIOSQ_CMD))) {
+    //
+    // Currently, we only use the IO Completion/Submission queues created internally
+    // by this driver during controller initialization. Any other IO queues created
+    // will not be consumed here. The value is little to accept external IO queue
+    // creation requests, so here we will return EFI_UNSUPPORTED for external IO
+    // queue creation request.
+    //
+    if (!Private->CreateIoQueue) {
+      DEBUG ((DEBUG_ERROR, "NvmExpressPassThru: Does not support external IO queues creation request.\n"));
+      return EFI_UNSUPPORTED;
+    }
+  } else if ((Sq->Opc & (BIT0 | BIT1)) != 0) {
+    //
+    // If the NVMe cmd has data in or out, then mapping the user buffer to the PCI controller specific addresses.
+    //
     if (((Packet->TransferLength != 0) && (Packet->TransferBuffer == NULL)) ||
         ((Packet->TransferLength == 0) && (Packet->TransferBuffer != NULL))) {
       return EFI_INVALID_PARAMETER;
-- 
2.12.0.windows.1
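The caller-visible effect: a Create I/O Completion/Submission Queue admin
command issued from outside the driver is now rejected up front, since
Private->CreateIoQueue is only TRUE while NvmeCreateIoCompletionQueue() and
NvmeCreateIoSubmissionQueue() run during controller initialization. The
fragment below continues the caller-side sketch given after patch 1/3 (same
Packet/Command/Completion locals and includes, all illustrative) and is
expected to fail with EFI_UNSUPPORTED.

  //
  // Hypothetical external caller: try to create an extra I/O completion
  // queue through EFI_NVM_EXPRESS_PASS_THRU_PROTOCOL. With this patch the
  // driver refuses the request instead of programming a queue it would
  // never service. (A real queue-create command would also need CDW10/CDW11
  // filled in, but the request is rejected before they are examined.)
  //
  ZeroMem (&Command, sizeof (Command));
  Command.Cdw0.Opcode   = NVME_ADMIN_CRIOCQ_CMD;   // Create I/O Completion Queue
  Packet.NvmeCmd        = &Command;
  Packet.NvmeCompletion = &Completion;
  Packet.QueueType      = NVME_ADMIN_QUEUE;
  Packet.CommandTimeout = 5 * 10000000ULL;         // 5 s, in 100 ns units

  Status = Passthru->PassThru (Passthru, 0, &Packet, NULL);
  ASSERT (Status == EFI_UNSUPPORTED);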
* Re: [PATCH v2 3/3] MdeModulePkg/NvmExpressDxe: Refine PassThru IO queue creation behavior

From: Ni, Ruiyu @ 2018-10-24  4:08 UTC
To: Wu, Hao A, edk2-devel@lists.01.org; +Cc: Yao, Jiewen, Zeng, Star

Reviewed-by: Ruiyu Ni <ruiyu.ni@intel.com>

Thanks/Ray

> -----Original Message-----
> From: Wu, Hao A
> Sent: Wednesday, October 24, 2018 9:20 AM
> To: edk2-devel@lists.01.org
> Cc: Wu, Hao A <hao.a.wu@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>;
> Ni, Ruiyu <ruiyu.ni@intel.com>; Zeng, Star <star.zeng@intel.com>
> Subject: [PATCH v2 3/3] MdeModulePkg/NvmExpressDxe: Refine PassThru
> IO queue creation behavior
>
> [...]