public inbox for devel@edk2.groups.io
From: "Brian J. Johnson" <brian.johnson@hpe.com>
To: Bill Paul <wpaul@windriver.com>, edk2-devel@lists.01.org
Cc: "Ni, Ruiyu" <ruiyu.ni@intel.com>, Laszlo Ersek <lersek@redhat.com>
Subject: Re: PciSegmentInfoLib instances
Date: Fri, 1 Jun 2018 15:04:52 -0500	[thread overview]
Message-ID: <68f3b93f-c170-7709-6510-7d4ceb3f0edb@hpe.com> (raw)
In-Reply-To: <201805281336.10118.wpaul@windriver.com>

On 05/28/2018 03:36 PM, Bill Paul wrote:
> Of all the gin joints in all the towns in all the world, Ni, Ruiyu had to walk
> into mine at 19:55 on Sunday 27 May 2018 and say:
> 
>> No. There is no such instance.
>>
>> My understanding:
>> Segment is just to separate the PCI devices to different groups.
>> Each group of devices use the continuous BUS/IO/MMIO resource.
>> Each group has a BASE PCIE address that can be used to access PCIE
>> configuration in MMIO way.
> 
> This makes it sound like an either/or design choice that a hardware designer
> can make simply for some kind of convenience, and I don't think that's the
> case.
> 
> Segments typically indicate completely separate host/PCI interfaces. For
> example, I've seen older Intel boards with both 32-bit/33MHz slots and 64-
> bit/66MHz slots. This was not done with a bridge: each set of slots was tied
> to a completely separate host PCI bridge and hence each was a separate
> segment. This was required in order to support legacy 32-bit/33MHz devices
> without forcing the 64-bit/66MHz devices down to 33MHz as well.
> 
> With PCIe, on platforms other than Intel, each root complex would also be a
> separate segment. Each root complex would have its own bus/dev/func namespace,
> its own configuration space access method, and its own portion of the physical
> address space into which to map BARs. This means that you could have two or
> more different devices with the same bus/dev/func identifier tuple, meaning
> they are not unique on a platform-wide basis.
> 
> At the hardware level, PCIe may be implemented similarly on Intel too, but
> they hide some of the details from you. The major difference is that even in
> cases where you may have multiple PCIe channels, they all share the same
> bus/dev/func namespace so that you can pretend the bus/dev/func tuples are
> unique platform-wide. The case where you would need to advertise multiple
> segments arises where there's some technical roadblock that prevents
> implementing this illusion of a single namespace in a completely transparent
> way.
> 
> In the case of the 32-bit/64-bit hybrid design I mentioned above, scanning the
> bus starting from bus0/dev0/func0 would only allow you to automatically
> discover the 32-bit devices because there was no bridge between the 32-bit and
> 64-bit spaces. The hardware allows you to issue configuration accesses to both
> spaces using the same 0xcf8/0xcfc registers, but in order to autodiscover the
> 64-bit devices, you needed to know ahead of time to also scan starting at
> bus1/dev0/func0. But the only way to know to do that was to check the
> advertised segments in the ACPI device table and honor their starting bus
> numbers.
>   
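[The legacy mechanism Bill describes can be sketched in C. The helper below only builds the 32-bit address word that is written to port 0xCF8 before data is read or written through 0xCFC; the actual port I/O is platform-specific and omitted. The bit layout follows the standard PCI configuration mechanism #1. As Bill notes, scanning with this mechanism alone only discovers devices on the bus ranges you already know to probe.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * PCI configuration mechanism #1: a 32-bit address written to
 * port 0xCF8 selects bus/device/function/register, and the data
 * is then accessed through port 0xCFC.  Only the address-word
 * construction is shown here.
 */
static uint32_t
pci_cf8_address (uint8_t bus, uint8_t dev, uint8_t func, uint8_t reg)
{
  return 0x80000000u                        /* bit 31: enable         */
         | ((uint32_t)bus << 16)            /* bits 23:16 bus number  */
         | ((uint32_t)(dev  & 0x1f) << 11)  /* bits 15:11 device      */
         | ((uint32_t)(func & 0x07) << 8)   /* bits 10:8  function    */
         | (reg & 0xfc);                    /* bits 7:2 dword-aligned */
}
```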

FWIW, the scalable X64 machines from SGI/HPE implement multiple 
segments, generally one per socket.  That's the only way for us to 
represent a machine of the size we're interested in.  There are multiple 
config space ranges which aren't necessarily located at contiguous 
addresses, only one of which (on the legacy socket) is accessible via 
the legacy 0xcf8/0xcfc mechanism.  So you need to parse the ACPI tables 
to discover all the I/O.
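[A sketch of what "parse the ACPI tables" amounts to for config space: each MCFG entry supplies an ECAM base address, a segment group number, and a decoded bus range, from which the MMIO address of any config register follows. The struct mirrors the ACPI MCFG allocation-structure layout; the field names here are illustrative, not the spec's.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative mirror of one ACPI MCFG "configuration space base
 * address allocation" entry.  One entry per segment (or per
 * discontiguous config-space range, as on the machines described
 * above).
 */
typedef struct {
  uint64_t base_address;   /* ECAM base for this range        */
  uint16_t segment;        /* PCI segment group number        */
  uint8_t  start_bus;      /* first bus number decoded        */
  uint8_t  end_bus;        /* last bus number decoded         */
} mcfg_entry;

/* Standard ECAM layout: 1 MiB per bus, 32 KiB per device,
 * 4 KiB per function.  Returns 0 if the bus is outside the
 * range this entry decodes. */
static uint64_t
ecam_address (const mcfg_entry *e, uint8_t bus, uint8_t dev,
              uint8_t func, uint16_t reg)
{
  if (bus < e->start_bus || bus > e->end_bus)
    return 0;
  return e->base_address
         + ((uint64_t)(bus - e->start_bus) << 20)
         + ((uint64_t)(dev  & 0x1f) << 15)
         + ((uint64_t)(func & 0x07) << 12)
         + (reg & 0xfff);
}
```

Only the range containing the legacy socket is also reachable through 0xCF8/0xCFC; everything else has to be found this way.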

We really wish all code used segments properly... we have had to convert 
a lot of 3rd-party code to use PciSegmentLib rather than the old PciLib. 
Thankfully OSes are in pretty good shape these days (that was not always 
the case).
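[The conversion Brian mentions can be illustrated in plain C: a PciLib-style address has no room for a segment number, while a PciSegmentLib-style address carries a 16-bit segment above the bus/dev/func/register bits. The bit positions below follow edk2's PCI_LIB_ADDRESS / PCI_SEGMENT_LIB_ADDRESS conventions as I understand them; treat them as an assumption and verify against the MdePkg headers.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustration (not the edk2 headers themselves) of why PciLib
 * call sites lose information on multi-segment machines: the
 * 32-bit address below cannot encode a segment, while the 64-bit
 * one carries it in bits 47:32.
 */
static uint32_t
pci_lib_address (uint32_t bus, uint32_t dev, uint32_t func, uint32_t reg)
{
  return (reg & 0xfff)
         | ((func & 0x07) << 12)
         | ((dev  & 0x1f) << 15)
         | ((bus  & 0xff) << 20);
}

static uint64_t
pci_segment_lib_address (uint32_t seg, uint32_t bus, uint32_t dev,
                         uint32_t func, uint32_t reg)
{
  return ((uint64_t)(seg & 0xffff) << 32)
         | pci_lib_address (bus, dev, func, reg);
}
```

For segment 0 the two encodings coincide, which is why PciLib-only code can appear to work until it meets a multi-segment machine like the ones described above.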

>> So with the above understanding, even a platform which has single segment
>> can be implemented as a multiple segments platform.
> 
> I would speculate this might only be true on Intel. :) Intel is the only
> platform that creates the illusion of a single bus/dev/func namespace for
> multiple PCI "hoses," and it only does that for backward compatibility
> purposes (i.e. to make Windows happy). Without that gimmick, each segment
> would be a separate tree rooted at bus0/dev0/func0, and there wouldn't be much
> point to doing that if you only had a single root complex.
> 
> -Bill
>   
>> Thanks/Ray
>>
>>> -----Original Message-----
>>> From: edk2-devel <edk2-devel-bounces@lists.01.org> On Behalf Of Laszlo
>>> Ersek
>>> Sent: Wednesday, May 23, 2018 3:38 PM
>>> To: Ni, Ruiyu <ruiyu.ni@intel.com>
>>> Cc: edk2-devel-01 <edk2-devel@lists.01.org>
>>> Subject: [edk2] PciSegmentInfoLib instances
>>>
>>> Hi Ray,
>>>
>>> do you know of any open source, non-Null, PciSegmentInfoLib instance?
>>> (Possibly outside of edk2?)
>>>
>>> More precisely, it's not the PciSegmentInfoLib instance itself that's of
>>> particular interest, but the hardware and the platform support code that
>>> offer multiple PCIe segments.
>>>
>>> Thanks
>>> Laszlo
>>> _______________________________________________
>>> edk2-devel mailing list
>>> edk2-devel@lists.01.org
>>> https://lists.01.org/mailman/listinfo/edk2-devel
>>


-- 
Brian J. Johnson
Enterprise X86 Lab

Hewlett Packard Enterprise



Thread overview: 5+ messages
2018-05-23  7:37 PciSegmentInfoLib instances Laszlo Ersek
2018-05-28  2:55 ` Ni, Ruiyu
2018-05-28 20:36   ` Bill Paul
2018-06-01 20:04     ` Brian J. Johnson [this message]
2018-06-01 20:50     ` Andrew Fish
