public inbox for devel@edk2.groups.io
From: "Kinney, Michael D" <michael.d.kinney@intel.com>
To: "Johnson, Brian (EXL - Eagan)" <brian.johnson@hpe.com>,
	Bill Paul <wpaul@windriver.com>,
	"edk2-devel@ml01.01.org" <edk2-devel@ml01.01.org>,
	"Kinney, Michael D" <michael.d.kinney@intel.com>
Cc: "edk2-devel@ml01.01.org" <edk2-devel@ml01.01.org>,
	Vladimir Olovyannikov <vladimir.olovyannikov@broadcom.com>,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
Date: Wed, 31 May 2017 18:07:07 +0000	[thread overview]
Message-ID: <E92EE9817A31E24EB0585FDF735412F5A7D22F91@ORSMSX113.amr.corp.intel.com> (raw)
In-Reply-To: <DF4PR84MB01559FA436208C101C8A4CBEE1F10@DF4PR84MB0155.NAMPRD84.PROD.OUTLOOK.COM>

There may also be some PCI terminology differences.

EDK II implements the PI Specifications.  The latest PI version is here:

  http://www.uefi.org/sites/default/files/resources/PI%201.5.zip

Volume 5, Chapter 10 (PCI Host Bridge) defines the terminology used by the
EDK II implementation.

Specifically, Section 10.4 discusses the following example PCI architectures:

* Desktop system with 1 PCI root bridge
* Server system with 4 PCI root bridges
* Server system with 2 PCI segments
* Server system with 2 PCI host buses

One additional element I have noticed is the difference between the HW
description of the platform's PCI subsystem and the SW view of that same
subsystem.
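
As a rough sketch of the SW view (using only the standard
EFI_PCI_ROOT_BRIDGE_IO_PROTOCOL from the PI spec; error handling kept
minimal), a DXE driver sees one protocol instance per root bridge, each
tagged with the segment it belongs to:

  #include <Uefi.h>
  #include <Library/UefiBootServicesTableLib.h>
  #include <Library/DebugLib.h>
  #include <Protocol/PciRootBridgeIo.h>

  VOID
  DumpRootBridges (
    VOID
    )
  {
    EFI_HANDLE                       *Handles;
    UINTN                            Count;
    UINTN                            Index;
    EFI_PCI_ROOT_BRIDGE_IO_PROTOCOL  *Io;

    //
    // Each PCI root bridge is published as one protocol instance;
    // SegmentNumber identifies the PCI segment (domain) that its
    // bus/dev/func space belongs to.
    //
    if (EFI_ERROR (gBS->LocateHandleBuffer (ByProtocol,
                          &gEfiPciRootBridgeIoProtocolGuid, NULL,
                          &Count, &Handles))) {
      return;
    }
    for (Index = 0; Index < Count; Index++) {
      gBS->HandleProtocol (Handles[Index], &gEfiPciRootBridgeIoProtocolGuid,
             (VOID **)&Io);
      DEBUG ((DEBUG_INFO, "Root bridge %u is on segment %u\n",
        (UINT32)Index, Io->SegmentNumber));
    }
    gBS->FreePool (Handles);
  }

How those instances get produced from the HW description is up to the
platform's host bridge driver and its libraries.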

Mike


-----Original Message-----
From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org] On Behalf Of Johnson, Brian (EXL - Eagan)
Sent: Wednesday, May 31, 2017 8:02 AM
To: Bill Paul <wpaul@windriver.com>; edk2-devel@ml01.01.org
Cc: edk2-devel@ml01.01.org; Vladimir Olovyannikov <vladimir.olovyannikov@broadcom.com>; Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: Re: [edk2] Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform



-----Original Message-----
From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org] On Behalf Of Bill Paul
Sent: Tuesday, May 30, 2017 12:34 PM
To: edk2-devel@ml01.01.org
Cc: edk2-devel@ml01.01.org; Vladimir Olovyannikov <vladimir.olovyannikov@broadcom.com>; Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: Re: [edk2] Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform

Of all the gin joints in all the towns in all the world, Vladimir Olovyannikov had to walk into mine at 09:49:16 on Tuesday 30 May 2017 and say:

> > -----Original Message-----
> > From: Ard Biesheuvel [mailto:ard.biesheuvel@linaro.org]
> > Sent: May-30-17 9:35 AM
> > To: Vladimir Olovyannikov
> > Cc: edk2-devel@lists.01.org
> > Subject: Re: Using a generic PciHostBridgeDxe driver for a 
> > multi-PCIe-domain platform
> > 
> > On 30 May 2017 at 16:23, Vladimir Olovyannikov
> > 
> > <vladimir.olovyannikov@broadcom.com> wrote:
> > > Hi,
> > > 
> > > I've started the PCIe stack implementation design for an ARMv8
> > > AArch64 platform.
> > > The platform's PCIe consists of several host bridges, and each
> > > host bridge has one root bridge.
> > > They do not share any resources with each other.
> > > Looking into the PciHostBridgeDxe implementation I can see that it
> > > supports only one host bridge, and there is a comment:
> > > // Most systems in the world including complex servers have only
> > > one Host Bridge.
> > > 
> > > So in my case should I create my own PciHostBridgeDxe driver
> > > supporting multiple host bridges and not use the industry-standard
> > > driver?
> > >
> > > I am very new to this, and will appreciate any help or ideas.
> > 
> > As far as I can tell, PciHostBridgeLib allows you to return an 
> > arbitrary number of PCI host bridges, each with their own segment 
> > number. I haven't tried it myself, but it is worth a try whether 
> > returning an array of all host bridges on your platform works as 
> > expected.
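
For reference, a minimal sketch of what Ard describes, using the
PciHostBridgeGetRootBridges() entry point declared in MdeModulePkg's
PciHostBridgeLib.h.  Only the Segment field is shown; a real library
instance must also fill in the bus/I-O/MMIO apertures, allocation
attributes, and a device path for each root bridge:

  #include <PiDxe.h>
  #include <Library/PciHostBridgeLib.h>

  //
  // One entry per independent root complex, each on its own PCI segment.
  //
  STATIC PCI_ROOT_BRIDGE  mRootBridges[] = {
    { 0 /* Segment */ },
    { 1 /* Segment */ },
  };

  PCI_ROOT_BRIDGE *
  EFIAPI
  PciHostBridgeGetRootBridges (
    UINTN  *Count
    )
  {
    *Count = ARRAY_SIZE (mRootBridges);
    return mRootBridges;
  }
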
> 
> Thank you Ard,
> Right, but PciHostBridgeDxe seems to work with one host bridge.
> I am confused by this comment:
> 
> // Make sure all root bridges share the same ResourceAssigned value
> 
> The root bridges are independent on this platform, and should not share
> anything. Or am I missing something?
> Anyway, I will try to return multiple host bridges from PciHostBridgeLib.

This may be an Intel-ism.

Note that for PCIe, I typically refer to "host bridges" as root complexes.

On PowerPC SoCs that I've worked with (e.g. Freescale/NXP MPC8548, P2020, P4080, T4240) there are often several root complexes. A typical board design may have several PCIe slots where each slot is connected to one root complex in the SoC. Each root complex is therefore the parent of a separate "segment" 
which has its own unique bus/dev/func space. Each root complex has its own bank of registers to control it, including a separate set of configuration space access registers. This means you can have multiple PCIe trees each with its own bus0/dev0/func0 root. There can therefore be several devices with the same bus/dev/func tuple, but which reside on separate segments.
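
To make the segment idea concrete, here is a sketch (not tied to any
particular SoC; the window addresses are made up) of how per-root-complex
ECAM windows keep otherwise identical bus/dev/func numbers from colliding:

  #include <Base.h>

  //
  // Hypothetical per-segment ECAM windows; the base addresses are made up.
  //
  STATIC CONST UINT64  mEcamBase[] = { 0x40000000ULL, 0x48000000ULL };

  //
  // Standard PCIe ECAM offset: bus << 20 | device << 15 | function << 12 | reg.
  // The same bus/dev/func on different segments decodes to a different
  // window, so the two trees never collide.
  //
  STATIC
  UINT64
  EcamAddress (
    IN UINTN  Segment,
    IN UINTN  Bus,
    IN UINTN  Device,
    IN UINTN  Function,
    IN UINTN  Register
    )
  {
    return mEcamBase[Segment] +
           ((UINT64)Bus << 20 | Device << 15 | Function << 12 |
            (Register & 0xFFF));
  }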

The ARMv8 board you're working with is probably set up the same way. I've only worked with ARM Cortex-A boards, and those have all had just one PCIe root complex, but it stands to reason that those with multiple root complexes would follow the same pattern as the PPC devices.

Intel systems can (and often do) also have multiple PCIe root complexes; however, for the sake of backwards compatibility, they all end up sharing the same configuration space access registers (0xCF8/0xCFC or memory-mapped extended config space) and a single unified bus/dev/func tree.
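
For contrast, a sketch of that shared legacy mechanism (configuration
mechanism #1, the 0xCF8/0xCFC pair).  Real firmware code would normally go
through PciLib or PciSegmentLib rather than banging the ports directly:

  #include <Base.h>
  #include <Library/IoLib.h>

  //
  // One address/data register pair is shared by every device, so there is
  // only a single bus/dev/func space no matter how many root complexes
  // sit behind it.
  //
  STATIC
  UINT32
  LegacyPciRead32 (
    IN UINTN  Bus,
    IN UINTN  Device,
    IN UINTN  Function,
    IN UINTN  Register
    )
  {
    UINT32  Address;

    Address = (UINT32)(0x80000000 | (Bus << 16) | (Device << 11) |
                       (Function << 8) | (Register & 0xFC));
    IoWrite32 (0xCF8, Address);
    return IoRead32 (0xCFC);
  }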

Note that the tree is not always contiguous. For example, I've seen one Intel board where there was a special PCIe device on bus 128. In the ACPI tables, there were two PCI "segments" described, the second of which corresponded to bus 128. There was no switch or bridge to connect bus 128 to the tree rooted at bus0/dev0/func0, so it would not be possible to automatically discover it by just walking the bus0/dev0/func0 tree and all its branches: you needed to use the ACPI hint to know it was there.
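
That ACPI hint typically takes one of two forms: an extra host-bridge
device (PNP0A03/PNP0A08) in the namespace with its own _SEG/_BBN, and, on
ECAM-capable systems, one MCFG allocation entry per segment group.  A
sketch of the MCFG entry layout (per the PCI Firmware specification):

  #include <Base.h>

  #pragma pack(1)
  //
  // One allocation entry in the ACPI MCFG table.  Each PCI segment group
  // gets its own ECAM base and bus range, which is how an OS finds buses
  // that are not reachable by walking segment 0.
  //
  typedef struct {
    UINT64    BaseAddress;            // ECAM base for this segment group
    UINT16    PciSegmentGroupNumber;  // matches _SEG in the DSDT
    UINT8     StartBusNumber;
    UINT8     EndBusNumber;
    UINT32    Reserved;
  } MCFG_ALLOCATION_ENTRY;
  #pragma pack()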

I have also seen cases like this with pre-PCIe systems. For example, I've seen a Dell server that had both 32-bit and 64-bit PCI buses, where the 64-bit bus was at bus 1 but was not directly bridged to bus 0 (the 32-bit bus). There was a reason for this: 64-bit PCI buses are usually clocked at 66MHz, but will fall back to 33MHz if you connect a 32-bit PCI device to them (this is supported for backward compatibility). Reducing the bus clock reduces performance, so to avoid that it's necessary to keep the 32-bit and 64-bit buses separate and thus give each one its own host bridge. As with the previous example, all the devices shared the same bus/dev/func space, but the only way to learn about the other segment was to probe the ACPI tables.

It sounds as if the UEFI PCI host bridge code may be biased towards the Intel PCI behavior, though I'm not sure to what extent.

So the comment that you found that says:

// Most systems in the world including complex servers have only one Host Bridge.

should probably be amended: it should say "Most Intel systems", and even those systems probably do have more than one host bridge (root complex); it just doesn't look like it.

-Bill
 
> Thank you,
> Vladimir
> _______________________________________________
> edk2-devel mailing list
> edk2-devel@lists.01.org
> https://lists.01.org/mailman/listinfo/edk2-devel

--
=============================================================================
-Bill Paul            (510) 749-2329 | Senior Member of Technical Staff,
                 wpaul@windriver.com | Master of Unix-Fu - Wind River Systems
=============================================================================
   "I put a dollar in a change machine. Nothing changed." - George Carlin
=============================================================================
_______________________________________________
edk2-devel mailing list
edk2-devel@lists.01.org
https://lists.01.org/mailman/listinfo/edk2-devel

FWIW, SGI (now HPE) scalable x86 systems typically implement at least
one PCIe segment per socket.  There can be dozens (even hundreds) of
sockets, so there are many root bridges.  Only segment zero is
accessible via the port 0xcf8/0xcfc mechanism.  The others are
memory-mapped.  We have implemented our own PciHostBridge module to
manage them.  It uses a single host bridge instance with many root
bridges linked under it.
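
For anyone reaching devices on those non-zero segments from EDK II code,
a sketch using MdePkg's PciSegmentLib (the segment and BDF values below
are just examples):

  #include <Base.h>
  #include <Library/PciSegmentLib.h>

  //
  // PciSegmentLib addresses carry the segment number, so devices on
  // segments other than zero (which have no 0xCF8/0xCFC alias) are
  // reached through their memory-mapped config windows.
  //
  UINT32
  ReadVendorDeviceId (
    VOID
    )
  {
    // Segment 3, bus 0, device 0, function 0, config offset 0
    return PciSegmentRead32 (PCI_SEGMENT_LIB_ADDRESS (3, 0, 0, 0, 0));
  }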

Brian J. Johnson
_______________________________________________
edk2-devel mailing list
edk2-devel@lists.01.org
https://lists.01.org/mailman/listinfo/edk2-devel


Thread overview: 12+ messages
2017-05-30 16:23 Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform Vladimir Olovyannikov
2017-05-30 16:35 ` Ard Biesheuvel
2017-05-30 16:49   ` Vladimir Olovyannikov
2017-05-30 17:34     ` Bill Paul
2017-05-30 18:11       ` Vladimir Olovyannikov
2017-05-31 15:02       ` Johnson, Brian (EXL - Eagan)
2017-05-31 18:07         ` Kinney, Michael D [this message]
2017-05-30 18:32 ` Laszlo Ersek
2017-05-30 18:50   ` Laszlo Ersek
2017-05-30 19:04     ` Vladimir Olovyannikov
2017-05-30 20:17       ` Laszlo Ersek
2017-05-30 20:23         ` Vladimir Olovyannikov
