public inbox for devel@edk2.groups.io
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: "Ni, Ruiyu" <ruiyu.ni@intel.com>
Cc: Guo Heyi <heyi.guo@linaro.org>,
	linaro-uefi <linaro-uefi@lists.linaro.org>,
	"edk2-devel@lists.01.org" <edk2-devel@lists.01.org>,
	Star Zeng <star.zeng@intel.com>,  Eric Dong <eric.dong@intel.com>,
	Jason Zhang <jason.zhang@linaro.org>
Subject: Re: [RFC] MdeModulePkg/PciHostBridge: Add address translation support
Date: Thu, 21 Dec 2017 09:52:26 +0000	[thread overview]
Message-ID: <CAKv+Gu_d+pNqPFHQj4u1OjJ99Sy0OCwBeJEMMQ=Xzs=BP+3CuQ@mail.gmail.com> (raw)
In-Reply-To: <9272c2e4-cd44-b299-480a-bde9946dc3f4@Intel.com>

On 21 December 2017 at 09:48, Ni, Ruiyu <ruiyu.ni@intel.com> wrote:
> On 12/21/2017 5:14 PM, Guo Heyi wrote:
>>
>> On Thu, Dec 21, 2017 at 08:32:37AM +0000, Ard Biesheuvel wrote:
>>>
>>> On 21 December 2017 at 08:27, Guo Heyi <heyi.guo@linaro.org> wrote:
>>>>
>>>> On Wed, Dec 20, 2017 at 03:26:45PM +0000, Ard Biesheuvel wrote:
>>>>>
>>>>> On 20 December 2017 at 15:17, gary guo <heyi.guo@linaro.org> wrote:
>>>>>>
>>>>>> On Wed, Dec 20, 2017 at 09:13:58AM +0000, Ard Biesheuvel wrote:
>>>>>>>
>>>>>>> Hi Heyi,
>>>>>>>
>>>>>>> On 20 December 2017 at 08:21, Heyi Guo <heyi.guo@linaro.org> wrote:
>>>>>>>>
>>>>>>>> PCIe on some ARM platforms requires address translation, not only
>>>>>>>> for legacy IO access but also for 32-bit memory BAR access. There
>>>>>>>> is an "Address Translation Unit" or something similar in the PCI
>>>>>>>> host bridges to translate CPU addresses to PCI addresses and vice
>>>>>>>> versa. So we think it may be useful to add address translation
>>>>>>>> support to the generic PCI host bridge driver.
>>>>>>>>
>>>>>>>
>>>>>>> I agree. While unusual on a PC, it is quite common on other
>>>>>>> architectures to have more complex, non-1:1 topologies, which
>>>>>>> currently require a forked PciHostBridgeDxe driver with local
>>>>>>> changes applied.
>>>>>>>
>>>>>>>> This RFC only contains one minor change to the definition of
>>>>>>>> PciHostBridgeLib, and there will certainly be a lot of other changes
>>>>>>>> needed to make it work, including:
>>>>>>>>
>>>>>>>> 1. Use the CPU address for GCD space add and allocate operations
>>>>>>>> instead of the PCI address; also, IO space will be changed to memory
>>>>>>>> space if a translation exists.
>>>>>>>>
>>>>>>>
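As a hypothetical sketch of what such a change could look like (the structure
and function names below are made up, and the "Device (PCI) Address = Host
(CPU) Address + Translation" convention is an assumption, not the actual
patch), a translation member on the aperture description would let the driver
feed CPU-side addresses into the GCD services:

#include <Uefi.h>
#include <Library/DxeServicesTableLib.h>

//
// Sketch only: assume the aperture description grows a Translation member,
// with Device (PCI) Address = Host (CPU) Address + Translation.
//
typedef struct {
  UINT64  Base;          // PCI (device) side base of the aperture
  UINT64  Limit;         // PCI (device) side limit of the aperture
  UINT64  Translation;   // hypothetical new member
} HOST_BRIDGE_APERTURE_SKETCH;

STATIC
EFI_STATUS
AddApertureToGcd (
  IN HOST_BRIDGE_APERTURE_SKETCH  *Aperture
  )
{
  //
  // GCD operates on the CPU view of the address space, so convert the
  // PCI-side base back to a host address before adding the range.
  //
  return gDS->AddMemorySpace (
                EfiGcdMemoryTypeMemoryMappedIo,
                Aperture->Base - Aperture->Translation,
                Aperture->Limit - Aperture->Base + 1,
                EFI_MEMORY_UC
                );
}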
>>>>>>> For I/O space, the translation should simply be applied to the I/O
>>>>>>> range. I don't think it makes sense to use memory space here, given
>>>>>>> that it is specific to architectures that lack native port I/O.
>>>>>>>
>>>>>>
>>>>>> I made an assumption here that platforms supporting a real port IO
>>>>>> space, like IA32 and X64, do not need address translation, and that
>>>>>> port IO translation implies the platform does not support a real port
>>>>>> IO space.
>>>>>>
>>>>>
>>>>> This may be a reasonable assumption. But I still think it is better
>>>>> not to encode any assumptions in the first place.
>>>>>
>>>>>> Indeed the assumption is not so "generic", so I'll agree if you
>>>>>> recommend supporting IO-to-IO translation as well. But I still hope to
>>>>>> have IO-to-memory translation support in the PCI host bridge driver,
>>>>>> rather than in the CPU IO protocol, since the faked IO space might only
>>>>>> be used by the PCI host bridge, and we may have overlapping IO ranges
>>>>>> for each host bridge, which is compatible with the PCIe specification
>>>>>> and the PCIe ACPI description.
>>>>>>
>>>>>
>>>>> That is fine. Under UEFI, these will translate to non-overlapping I/O
>>>>> spaces in the CPU's view. Under the OS, this could be totally
>>>>> different.
>>>>>
>>>>>
>>>>> For example,
>>>>>
>>>>> RC0 IO 0x0000 .. 0xffff -> CPU 0x00000 .. 0x0ffff
>>>>> RC1 IO 0x0000 .. 0xffff -> CPU 0x10000 .. 0x1ffff
>>>>>
>>>>> This is very similar to how MMIO translation works, and makes I/O
>>>>> devices behind the host bridges uniquely addressable for drivers.
>>>>>
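To put numbers on that: a device behind RC1 whose BAR decodes PCI I/O address
0x1000 would be reached at CPU I/O address 0x10000 + 0x1000 = 0x11000, while
the same PCI I/O address behind RC0 stays at 0x01000, so the two no longer
collide in the CPU's view (the direction of the offset here is an assumption
for the illustration).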
>>>>> For our understanding, could you share the host bridge configuration
>>>>> that you are targeting?
>>>>
>>>>
>>>> IO translation on one of our platforms is like below:
>>>>
>>>> PCI IO space        CPU memory space
>>>> 0x0000 .. 0xffff -> 0xafff0000 .. 0xafffffff
>>>> (The sizes are always 0x10000 so I will omit the limit for others)
>>>> 0x0000 .. 0xffff -> 0x8abff0000
>>>> 0x0000 .. 0xffff -> 0x8b7ff0000
>>>> ......
>>>>
>>>> The translated addresses may be beyond the 32-bit range; will this
>>>> violate any IO space limitation? From the EDK2 code, I didn't see such
>>>> a limitation for IO space.
>>>>
>>>
>>> The MMIO address will not be used for I/O port addressing by the CPU.
>>> The MMIO to IO translation is an implementation detail of your CpuIo2
>>> protocol implementation.
>>>
>>> So there will be two stacked translations, one for PCI I/O to CPU I/O,
>>> and one for CPU I/O to CPU MMIO. The latter is transparent to the PCI
>>> host bridge driver.
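In other words, the two offsets simply compose. Using the numbers quoted
above (and assuming, for the illustration only, that the second MMIO window
0x8abff0000 belongs to the second root complex):

  PciIoPort = 0x0020                                  (behind the second RC)
  CpuIoPort = PciIoPort + 0x10000        = 0x10020    (PciHostBridgeDxe view)
  MmioAddr  = 0x8abff0000 + (CpuIoPort - 0x10000)
            = 0x8abff0020                             (inside CpuIo2 only)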
>>
>>
>> Yes this should work.
>>
>> Hi Star, Eric and Ruiyu,
>>
>> Any comments on this RFC?
>
>
> Let me confirm my understanding:
> The PciHostBridge core driver/library interface changes only
> take care of the MMIO translation.
>
> Heyi, you will implement a special CpuIo driver in your
> platform code to take care of the IO to MMIO translation.
>
> But let me confirm: will you need to additionally translate
> the MMIO (translated from IO) to another MMIO using an offset?
> If yes, will you handle that translation in your CpuIo driver?
>

Hi Ray,

The issue is that several PCIe root complexes have colliding I/O ranges:

>>>> PCI IO space        CPU memory space
>>>> 0x0000 .. 0xffff -> 0xafff0000 .. 0xafffffff
>>>> (The sizes are always 0x10000 so I will omit the limit for others)
>>>> 0x0000 .. 0xffff -> 0x8abff0000
>>>> 0x0000 .. 0xffff -> 0x8b7ff0000

So the CPU view is different from the PCI view, and to create a single
CPU I/O space where all I/O port ranges behind all host bridges are
accessible, we need I/O translation for the CPU. This will result in
an intermediate representation:

>>>> PCI IO space        CPU IO space
>>>> 0x0000 .. 0xffff -> 0x00000 .. 0x0ffff
>>>> 0x0000 .. 0xffff -> 0x10000 .. 0x1ffff
>>>> 0x0000 .. 0xffff -> 0x20000 .. 0x2ffff

On top of that, given that ARM has no native port I/O instructions, we
also need the MMIO to IO translation, but that can be handled entirely
in the CpuIo2 protocol implementation.
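For illustration, a minimal sketch of how a platform CpuIo2 implementation
could back the unified CPU I/O space with the three MMIO windows from the
example (the window table, function name and layout are made up, and a real
driver would derive the windows from the platform description);
PciHostBridgeDxe never sees this layer:

#include <Base.h>
#include <Library/IoLib.h>

//
// Made-up window table mirroring the example ranges in this thread.
//
typedef struct {
  UINT64  CpuIoBase;   // base of the window in the unified CPU I/O space
  UINT64  MmioBase;    // MMIO address backing that window
  UINT64  Size;
} IO_WINDOW;

STATIC CONST IO_WINDOW  mIoWindows[] = {
  { 0x00000, 0x0AFFF0000ULL, SIZE_64KB },
  { 0x10000, 0x8ABFF0000ULL, SIZE_64KB },
  { 0x20000, 0x8B7FF0000ULL, SIZE_64KB },
};

STATIC
UINT32
CpuIoPortRead32 (
  IN UINT64  Port
  )
{
  UINTN  Index;

  for (Index = 0; Index < ARRAY_SIZE (mIoWindows); Index++) {
    if ((Port >= mIoWindows[Index].CpuIoBase) &&
        (Port < mIoWindows[Index].CpuIoBase + mIoWindows[Index].Size)) {
      //
      // The port access is serviced by an MMIO access at the corresponding
      // offset inside the window's MMIO aperture.
      //
      return MmioRead32 ((UINTN)(mIoWindows[Index].MmioBase +
                                 (Port - mIoWindows[Index].CpuIoBase)));
    }
  }

  return MAX_UINT32;  // out of range; a real driver would report an error
}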


Thread overview: 16+ messages
2017-12-20  8:21 [RFC] MdeModulePkg/PciHostBridge: Add address translation support Heyi Guo
2017-12-20  9:13 ` Ard Biesheuvel
2017-12-20 15:17   ` gary guo
2017-12-20 15:26     ` Ard Biesheuvel
2017-12-21  8:27       ` Guo Heyi
2017-12-21  8:32         ` Ard Biesheuvel
2017-12-21  9:14           ` Guo Heyi
2017-12-21  9:48             ` Ni, Ruiyu
2017-12-21  9:52               ` Ard Biesheuvel [this message]
2017-12-21  9:59                 ` Ni, Ruiyu
2017-12-21 10:07                   ` Ard Biesheuvel
2017-12-26  6:50                     ` Guo Heyi
2018-01-02  7:56                       ` Ni, Ruiyu
2018-01-02 15:09                         ` gary guo
2017-12-21  9:43       ` Ni, Ruiyu
2017-12-21 11:32         ` Guo Heyi
