public inbox for devel@edk2.groups.io
From: Andrew Fish <afish@apple.com>
To: Bill Paul <wpaul@windriver.com>
Cc: edk2-devel@lists.01.org, "Ni, Ruiyu" <ruiyu.ni@intel.com>,
	Laszlo Ersek <lersek@redhat.com>
Subject: Re: PciSegmentInfoLib instances
Date: Fri, 01 Jun 2018 13:50:10 -0700	[thread overview]
Message-ID: <5711A2C2-F8D1-452D-917C-107C680A78C3@apple.com> (raw)
In-Reply-To: <201805281336.10118.wpaul@windriver.com>



> On May 28, 2018, at 1:36 PM, Bill Paul <wpaul@windriver.com> wrote:
> 
> Of all the gin joints in all the towns in all the world, Ni, Ruiyu had to walk 
> into mine at 19:55 on Sunday 27 May 2018 and say:
> 
>> No. There is no such instance.
>> 
>> My understanding:
>> Segment is just to separate the PCI devices to different groups.
>> Each group of devices use the continuous BUS/IO/MMIO resource.
>> Each group has a BASE PCIE address that can be used to access PCIE
>> configuration in MMIO way.
> 
> This makes it sound like an either/or design choice that a hardware designer 
> can make simply for some kind of convenience, and I don't think that's the 
> case.
> 

Bill,

I thought Ray was saying that a segmented API works on a non-segmented architecture. I think that is a true statement.

A single segment could also comprise multiple PCI host bridges. This was common on the old Intel PCI server chipsets. The Intel client chipsets hid everything behind PCI-to-PCI bridges and the default PC behavior (0xCF8/0xCFC).
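For reference, that legacy PC mechanism packs bus/dev/func into a single 32-bit CONFIG_ADDRESS word written to port 0xCF8, with the data then read or written through port 0xCFC. A minimal sketch of just the encoding (hypothetical helper name; the actual port I/O is omitted):

```c
#include <stdint.h>

/* Sketch of the legacy "configuration mechanism #1" address word
   (port 0xCF8 = CONFIG_ADDRESS, port 0xCFC = CONFIG_DATA).
   Hypothetical helper, not an edk2 API. */
uint32_t
LegacyPciConfigAddress (uint32_t Bus, uint32_t Dev,
                        uint32_t Func, uint32_t Offset)
{
  return (1u << 31)              /* enable bit                     */
       | ((Bus  & 0xFF) << 16)   /* bus number    -> bits 23:16    */
       | ((Dev  & 0x1F) << 11)   /* device number -> bits 15:11    */
       | ((Func & 0x07) <<  8)   /* function      -> bits 10:8     */
       | (Offset & 0xFC);        /* dword-aligned register offset  */
}
```

Note there is no segment field anywhere in that word, which is why everything reachable this way has to share one bus/dev/func namespace.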

It was not clear from your comment: are you saying there is a place where we are passing around bus/dev/func and it is broken? The device paths are covered since they start with an ACPI node, and they only store dev/func, so you always get the same result even if buses get added. From a spec and implementation point of view we tried hard to make sure we did not have any limiting assumptions.

Thanks,

Andrew Fish


> Segments typically indicate completely separate host/PCI interfaces. For 
> example, I've seen older Intel boards with both 32-bit/33MHz slots and 64-
> bit/66MHz slots. This was not done with a bridge: each set of slots was tied 
> to a completely separate host PCI bridge and hence each was a separate 
> segment. This was required in order to support legacy 32-bit/33MHz devices 
> without forcing the 64-bit/66MHz devices down to 33MHz as well.
> 
> With PCIe, on platforms other than Intel, each root complex would also be a 
> separate segment. Each root complex would have its own bus/dev/func namespace, 
> its own configuration space access method, and its own portion of the physical 
> address space into which to map BARs. This means that you could have two or 
> more different devices with the same bus/dev/func identifier tuple, meaning 
> they are not unique on a platform-wide basis. 
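This non-uniqueness is exactly why the segment number has to ride along with bus/dev/func. One common answer is to pack all four into a single 64-bit address; the sketch below is modeled loosely on the shape of edk2's PCI_SEGMENT_LIB_ADDRESS macro, but treat the exact bit layout here as illustrative:

```c
#include <stdint.h>

/* Packs segment/bus/dev/func/register into one 64-bit address so that
   two devices with identical bus/dev/func in different segments remain
   distinct. Loosely modeled on edk2's PCI_SEGMENT_LIB_ADDRESS. */
uint64_t
PciSegmentAddress (uint64_t Seg, uint64_t Bus, uint64_t Dev,
                   uint64_t Func, uint64_t Reg)
{
  return (Seg << 32)               /* segment disambiguates the namespace */
       | ((Bus  & 0xFF) << 20)
       | ((Dev  & 0x1F) << 15)
       | ((Func & 0x07) << 12)
       | (Reg  & 0xFFF);
}
```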
> 
> At the hardware level, PCIe may be implemented similarly on Intel too, but 
> they hide some of the details from you. The major difference is that even in 
> cases where you may have multiple PCIe channels, they all share the same 
> bus/dev/func namespace so that you can pretend the bus/dev/func tuples are 
> unique platform-wide. The case where you would need to advertise multiple 
> segments arises where there's some technical roadblock that prevents 
> implementing this illusion of a single namespace in a completely transparent 
> way.
> 
> In the case of the 32-bit/64-bit hybrid design I mentioned above, scanning the 
> bus starting from bus0/dev0/func0 would only allow you to automatically 
> discover the 32-bit devices because there was no bridge between the 32-bit and 
> 64-bit spaces. The hardware allows you to issue configuration accesses to both 
> spaces using the same 0xcf8/0xcfc registers, but in order to autodiscover the 
> 64-bit devices, you needed to know ahead of time to also scan starting at 
> bus1/dev0/func0. But the only way to know to do that was to check the 
> advertised segments in the ACPI device table and honor their starting bus 
> numbers.
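In ECAM terms, that per-segment information is what makes the scan possible: each segment record carries its own config-space base and starting bus number, and the address is computed relative to those. A hedged sketch, using an illustrative struct rather than the actual ACPI MCFG table layout:

```c
#include <stdint.h>

/* Illustrative per-segment record, in the spirit of an ACPI MCFG
   allocation entry (not the actual table layout). */
typedef struct {
  uint64_t EcamBase;   /* MMIO base of this segment's config space  */
  uint16_t Segment;    /* PCI segment group number                  */
  uint8_t  StartBus;   /* first bus decoded by this host bridge     */
  uint8_t  EndBus;     /* last bus decoded by this host bridge      */
} SegmentInfo;

/* Standard ECAM arithmetic: bus<<20 | dev<<15 | func<<12 | reg,
   with the bus taken relative to the segment's starting bus. */
uint64_t
EcamAddress (const SegmentInfo *Seg, uint8_t Bus, uint8_t Dev,
             uint8_t Func, uint16_t Reg)
{
  return Seg->EcamBase
       + (((uint64_t)(Bus - Seg->StartBus) << 20)
        | ((uint64_t)(Dev  & 0x1F) << 15)
        | ((uint64_t)(Func & 0x07) << 12)
        | (Reg & 0xFFF));
}
```

A scan that only ever started from bus 0 of segment 0 would miss every record whose StartBus is nonzero, which is the failure mode described above.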
> 
>> So with the above understanding, even a platform which has single segment
>> can be implemented as a multiple segments platform.
> 
> I would speculate this might only be true on Intel. :) Intel is the only 
> platform that creates the illusion of a single bus/dev/func namespace for 
> multiple PCI "hoses," and it only does that for backward compatibility 
> purposes (i.e. to make Windows happy). Without that gimmick, each segment 
> would be a separate tree rooted at bus0/dev0/func0, and there wouldn't be much 
> point to doing that if you only had a single root complex.
> 
> -Bill
> 
>> Thanks/Ray
>> 
>>> -----Original Message-----
>>> From: edk2-devel <edk2-devel-bounces@lists.01.org> On Behalf Of Laszlo
>>> Ersek
>>> Sent: Wednesday, May 23, 2018 3:38 PM
>>> To: Ni, Ruiyu <ruiyu.ni@intel.com>
>>> Cc: edk2-devel-01 <edk2-devel@lists.01.org>
>>> Subject: [edk2] PciSegmentInfoLib instances
>>> 
>>> Hi Ray,
>>> 
>>> do you know of any open source, non-Null, PciSegmentInfoLib instance?
>>> (Possibly outside of edk2?)
>>> 
>>> More precisely, it's not the PciSegmentInfoLib instance itself that's of
>>> particular interest, but the hardware and the platform support code that
>>> offer multiple PCIe segments.
>>> 
>>> Thanks
>>> Laszlo
>>> _______________________________________________
>>> edk2-devel mailing list
>>> edk2-devel@lists.01.org
>>> https://lists.01.org/mailman/listinfo/edk2-devel
>> 
> -- 
> =============================================================================
> -Bill Paul            (510) 749-2329 | Senior Member of Technical Staff,
>                 wpaul@windriver.com | Master of Unix-Fu - Wind River Systems
> =============================================================================
>   "I put a dollar in a change machine. Nothing changed." - George Carlin
> =============================================================================



Thread overview: 5+ messages
2018-05-23  7:37 PciSegmentInfoLib instances Laszlo Ersek
2018-05-28  2:55 ` Ni, Ruiyu
2018-05-28 20:36   ` Bill Paul
2018-06-01 20:04     ` Brian J. Johnson
2018-06-01 20:50     ` Andrew Fish [this message]
