From: "Ni, Ruiyu" <ruiyu.ni@intel.com>
To: Guo Heyi, Ard Biesheuvel
Cc: Eric Dong, Jason Zhang, "edk2-devel@lists.01.org", linaro-uefi, Star Zeng
Date: Tue, 2 Jan 2018 15:56:14 +0800
Subject: Re: [RFC] MdeModulePkg/PciHostBridge: Add address translation support
In-Reply-To: <20171226065050.GA29912@SZX1000114654>
References: <20171220151704.GA2482@iwish> <20171221082726.GA100076@SZX1000114654> <20171221091453.GB100076@SZX1000114654> <9272c2e4-cd44-b299-480a-bde9946dc3f4@Intel.com> <979e60dc-13b3-48fd-fa08-58d570791b0e@Intel.com> <20171226065050.GA29912@SZX1000114654>
List-Id: EDK II Development

On 12/26/2017 2:50 PM,
Guo Heyi wrote:
> Hi Ard, Ray,
>
> Have we come to the final conclusion? Or are we still waiting for more
> comments on this?

Heyi,
I think you can send out a draft version of the changes for better
understanding.

> Thanks,
>
> Gary
>
> On Thu, Dec 21, 2017 at 10:07:51AM +0000, Ard Biesheuvel wrote:
>> On 21 December 2017 at 09:59, Ni, Ruiyu wrote:
>>> On 12/21/2017 5:52 PM, Ard Biesheuvel wrote:
>>>>
>>>> On 21 December 2017 at 09:48, Ni, Ruiyu wrote:
>>>>>
>>>>> On 12/21/2017 5:14 PM, Guo Heyi wrote:
>>>>>>
>>>>>> On Thu, Dec 21, 2017 at 08:32:37AM +0000, Ard Biesheuvel wrote:
>>>>>>>
>>>>>>> On 21 December 2017 at 08:27, Guo Heyi wrote:
>>>>>>>>
>>>>>>>> On Wed, Dec 20, 2017 at 03:26:45PM +0000, Ard Biesheuvel wrote:
>>>>>>>>>
>>>>>>>>> On 20 December 2017 at 15:17, gary guo wrote:
>>>>>>>>>>
>>>>>>>>>> On Wed, Dec 20, 2017 at 09:13:58AM +0000, Ard Biesheuvel wrote:
>>>>>>>>>>>
>>>>>>>>>>> Hi Heyi,
>>>>>>>>>>>
>>>>>>>>>>> On 20 December 2017 at 08:21, Heyi Guo wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> PCIe on some ARM platforms requires address translation, not
>>>>>>>>>>>> only for legacy IO access, but also for 32-bit memory BAR
>>>>>>>>>>>> access. There will be an "Address Translation Unit" or
>>>>>>>>>>>> something similar in PCI host bridges to translate CPU
>>>>>>>>>>>> addresses to PCI addresses and vice versa. So we think it may
>>>>>>>>>>>> be useful to add address translation support to the generic
>>>>>>>>>>>> PCI host bridge driver.
>>>>>>>>>>>
>>>>>>>>>>> I agree. While unusual on a PC, it is quite common on other
>>>>>>>>>>> architectures to have more complex non-1:1 topologies, which
>>>>>>>>>>> currently require a forked PciHostBridgeDxe driver with local
>>>>>>>>>>> changes applied.
>>>>>>>>>>>
>>>>>>>>>>>> This RFC only contains one minor change to the definition of
>>>>>>>>>>>> PciHostBridgeLib, and there will certainly be a lot of other
>>>>>>>>>>>> changes to make it work, including:
>>>>>>>>>>>>
>>>>>>>>>>>> 1. Use CPU addresses for GCD space add and allocate
>>>>>>>>>>>> operations, instead of PCI addresses; also, IO space will be
>>>>>>>>>>>> changed to memory space if a translation exists.
>>>>>>>>>>>
>>>>>>>>>>> For I/O space, the translation should simply be applied to the
>>>>>>>>>>> I/O range. I don't think it makes sense to use memory space
>>>>>>>>>>> here, given that it is specific to architectures that lack
>>>>>>>>>>> native port I/O.
>>>>>>>>>>
>>>>>>>>>> I made an assumption here that platforms supporting real port
>>>>>>>>>> IO space, like IA32 and X64, do not need address translation,
>>>>>>>>>> and that port IO translation implies the platform does not
>>>>>>>>>> support real port IO space.
>>>>>>>>>
>>>>>>>>> This may be a reasonable assumption. But I still think it is
>>>>>>>>> better not to encode any assumptions in the first place.
>>>>>>>>>
>>>>>>>>>> Indeed the assumption is not so "generic", so I'll agree if you
>>>>>>>>>> recommend supporting IO-to-IO translation as well. But I still
>>>>>>>>>> hope to have IO-to-memory translation support in the PCI host
>>>>>>>>>> bridge driver, rather than in the CPU IO protocol, since the
>>>>>>>>>> faked IO space might only be used for the PCI host bridge, and
>>>>>>>>>> we may have overlapping IO ranges for each host bridge, which is
>>>>>>>>>> compatible with the PCIe specification and PCIe ACPI
>>>>>>>>>> description.
>>>>>>>>>
>>>>>>>>> That is fine. Under UEFI, these will translate to
>>>>>>>>> non-overlapping I/O spaces in the CPU's view. Under the OS, this
>>>>>>>>> could be totally different.
>>>>>>>>>
>>>>>>>>> For example,
>>>>>>>>>
>>>>>>>>> RC0 IO 0x0000 .. 0xffff -> CPU 0x00000 .. 0x0ffff
>>>>>>>>> RC1 IO 0x0000 .. 0xffff -> CPU 0x10000 .. 0x1ffff
>>>>>>>>>
>>>>>>>>> This is very similar to how MMIO translation works, and makes
>>>>>>>>> I/O devices behind the host bridges uniquely addressable for
>>>>>>>>> drivers.
>>>>>>>>>
>>>>>>>>> For our understanding, could you share the host bridge
>>>>>>>>> configuration that you are targeting?
>>>>>>>>
>>>>>>>> IO translation on one of our platforms is like below:
>>>>>>>>
>>>>>>>> PCI IO space        CPU memory space
>>>>>>>> 0x0000 .. 0xffff -> 0xafff0000 .. 0xafffffff
>>>>>>>> (The sizes are always 0x10000, so I will omit the limit for the
>>>>>>>> others)
>>>>>>>> 0x0000 .. 0xffff -> 0x8abff0000
>>>>>>>> 0x0000 .. 0xffff -> 0x8b7ff0000
>>>>>>>> ......
>>>>>>>>
>>>>>>>> The translated addresses may be beyond the 32-bit range; will
>>>>>>>> this violate any IO space limitation? From the EDK2 code, I
>>>>>>>> didn't see such a limitation for IO space.
>>>>>>>
>>>>>>> The MMIO address will not be used for I/O port addressing by the
>>>>>>> CPU. The MMIO-to-IO translation is an implementation detail of
>>>>>>> your CpuIo2 protocol implementation.
>>>>>>>
>>>>>>> So there will be two stacked translations: one for PCI I/O to CPU
>>>>>>> I/O, and one for CPU I/O to CPU MMIO. The latter is transparent to
>>>>>>> the PCI host bridge driver.
>>>>>>
>>>>>> Yes, this should work.
>>>>>>
>>>>>> Hi Star, Eric and Ruiyu,
>>>>>>
>>>>>> Any comments on this RFC?
>>>>>
>>>>> Let me confirm my understanding:
>>>>> The PciHostBridge core driver/library interface changes only
>>>>> take care of the MMIO translation.
>>>>>
>>>>> Heyi, you will implement a special CpuIo driver in your
>>>>> platform code to take care of the IO-to-MMIO translation.
>>>>>
>>>>> But let me confirm: will you need to additionally translate
>>>>> the MMIO (translated from IO) to another MMIO using an offset?
>>>>> If yes, will you handle that translation in your CpuIo driver?
>>>>>
>>>> Hi Ray,
>>>>
>>>> The issue is that several PCIe root complexes have colliding I/O
>>>> ranges:
>>>
>>> Ard,
>>> The IO-MMIO mapping needs CPU support. I am not sure whether IA32 or
>>> x64 supports it.
>>> But I guess ARM supports it, right?
>>
>> Yes. Exposing PCI I/O ranges via MMIO translation is specific to the
>> CpuIo2 implementations we have for ARM.
>>
>>> Will all the IO part be implemented in the ARM CpuIo2 protocol?
>>
>> No. The CPU-to-PCI I/O translation needs to be in PciHostBridgeDxe,
>> because it is in charge of allocating the GCD space, and without
>> translation, those allocations will collide. Also, while perhaps
>> non-existent in reality, it is imaginable that a host bridge could
>> translate port I/O addresses between the two sides of the bridge,
>> similar to how non-1:1 mapped PCI MMIO is handled.
>>
>> So just like MMIO, the port I/O address used by the CPU and the port
>> I/O address programmed into the device BAR could be subject to
>> translation.
>>
>> This is not solvable in the CpuIo2 protocol, because without
>> translation at the host bridge driver level, a CPU port I/O address is
>> ambiguous: e.g., address 0x1000 may apply to each of the RCs.
>>
>>>>>>>> PCI IO space        CPU memory space
>>>>>>>> 0x0000 .. 0xffff -> 0xafff0000 .. 0xafffffff
>>>>>>>> (The sizes are always 0x10000, so I will omit the limit for the
>>>>>>>> others)
>>>>>>>> 0x0000 .. 0xffff -> 0x8abff0000
>>>>>>>> 0x0000 .. 0xffff -> 0x8b7ff0000
>>>>
>>>> So the CPU view is different from the PCI view, and to create a
>>>> single CPU I/O space where all I/O port ranges behind all host
>>>> bridges are accessible, we need I/O translation for the CPU. This
>>>> will result in an intermediate representation:
>>>>
>>>>>>>> PCI IO space        CPU IO space
>>>>>>>> 0x0000 .. 0xffff -> 0x00000 .. 0x0ffff
>>>>>>>> 0x0000 .. 0xffff -> 0x10000 .. 0x1ffff
>>>>>>>> 0x0000 .. 0xffff -> 0x20000 .. 0x2ffff
>>>>
>>>> On top of that, given that ARM has no native port I/O instructions,
>>>> we will need to implement MMIO-to-IO translation, but this can be
>>>> implemented in the CpuIo2 protocol.
>>>> _______________________________________________
>>>> edk2-devel mailing list
>>>> edk2-devel@lists.01.org
>>>> https://lists.01.org/mailman/listinfo/edk2-devel
>>>
>>> --
>>> Thanks,
>>> Ray

--
Thanks,
Ray