* Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
@ 2017-05-30 16:23 Vladimir Olovyannikov
2017-05-30 16:35 ` Ard Biesheuvel
2017-05-30 18:32 ` Laszlo Ersek
0 siblings, 2 replies; 12+ messages in thread
From: Vladimir Olovyannikov @ 2017-05-30 16:23 UTC (permalink / raw)
To: edk2-devel; +Cc: Ard Biesheuvel
Hi,

I've started PCIe stack implementation design for an ARMv8 AArch64
platform.
The platform's PCIe subsystem consists of several host bridges, and each
host bridge has one root bridge.
They do not share any resources with each other.
Looking into the PciHostBridgeDxe implementation I can see that it
supports only one host bridge, and there is a comment:
// Most systems in the world including complex servers have only one Host Bridge.

So in my case, should I create my own PciHostBridgeDxe driver supporting
multiple host bridges and not use the industry-standard driver?
I am very new to this and would appreciate any help or ideas.

Thank you,
Vladimir
* Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
2017-05-30 16:23 Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform Vladimir Olovyannikov
@ 2017-05-30 16:35 ` Ard Biesheuvel
2017-05-30 16:49 ` Vladimir Olovyannikov
2017-05-30 18:32 ` Laszlo Ersek
1 sibling, 1 reply; 12+ messages in thread
From: Ard Biesheuvel @ 2017-05-30 16:35 UTC (permalink / raw)
To: Vladimir Olovyannikov; +Cc: edk2-devel@lists.01.org
On 30 May 2017 at 16:23, Vladimir Olovyannikov
<vladimir.olovyannikov@broadcom.com> wrote:
> Hi,
>
> I've started PCIe stack implementation design for an armv8 aarch64
> platform.
> The platform's PCIe represents several host bridges, and each hostbridge
> has one rootbridge.
> They do not share any resources between each other.
> Looking into the PciHostBridgeDxe implementation I can see that it
> supports only one hostbridge, and there is a comment:
> // Most systems in the world including complex servers have only one Host
> Bridge.
>
> So in my case should I create my own PciHostBridgeDxe driver supporting
> multiple hostbridges and do not use the Industry standard driver?
> I am very new to it, and will appreciate any help or idea.
>
As far as I can tell, PciHostBridgeLib allows you to return an
arbitrary number of PCI host bridges, each with their own segment
number. I haven't tried it myself, but it is worth trying whether
returning an array of all the host bridges on your platform works as
expected.
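Purely as an illustration, a minimal sketch of what such a
PciHostBridgeLib instance might look like, assuming two root complexes
with one root bridge (one PCI segment) each; the aperture values are
placeholders, and the library's other mandatory entry points
(PciHostBridgeFreeRootBridges and PciHostBridgeResourceConflict) are
left out:

  #include <PiDxe.h>
  #include <Library/DevicePathLib.h>
  #include <Library/PciHostBridgeLib.h>

  #define PLATFORM_RC_COUNT  2          // assumed number of root complexes

  #pragma pack (1)
  typedef struct {
    ACPI_HID_DEVICE_PATH      AcpiDevicePath;
    EFI_DEVICE_PATH_PROTOCOL  EndDevicePath;
  } PLATFORM_RC_DEVICE_PATH;
  #pragma pack ()

  STATIC PLATFORM_RC_DEVICE_PATH  mDevicePaths[PLATFORM_RC_COUNT];
  STATIC PCI_ROOT_BRIDGE          mRootBridges[PLATFORM_RC_COUNT];

  PCI_ROOT_BRIDGE *
  EFIAPI
  PciHostBridgeGetRootBridges (
    UINTN  *Count
    )
  {
    UINTN  Index;

    for (Index = 0; Index < PLATFORM_RC_COUNT; Index++) {
      //
      // PcieRoot(Index): a PNP0A08 node whose UID is unique across all
      // host bridges (ideally matching the _UID of the corresponding
      // PNP0A08 object in the ACPI tables).
      //
      mDevicePaths[Index].AcpiDevicePath.Header.Type    = ACPI_DEVICE_PATH;
      mDevicePaths[Index].AcpiDevicePath.Header.SubType = ACPI_DP;
      SetDevicePathNodeLength (
        &mDevicePaths[Index].AcpiDevicePath,
        sizeof (ACPI_HID_DEVICE_PATH)
        );
      mDevicePaths[Index].AcpiDevicePath.HID = EFI_PNP_ID (0x0A08);
      mDevicePaths[Index].AcpiDevicePath.UID = (UINT32)Index;
      SetDevicePathEndNode (&mDevicePaths[Index].EndDevicePath);

      //
      // One root bridge per host bridge, each on its own PCI segment.
      //
      mRootBridges[Index].Segment               = (UINT32)Index;
      mRootBridges[Index].Supports              = 0;
      mRootBridges[Index].Attributes            = 0;
      mRootBridges[Index].DmaAbove4G            = TRUE;   // platform choice
      mRootBridges[Index].NoExtendedConfigSpace = FALSE;
      mRootBridges[Index].ResourceAssigned      = FALSE;
      mRootBridges[Index].AllocationAttributes  = 0;      // platform choice
      mRootBridges[Index].Bus.Base              = 0;
      mRootBridges[Index].Bus.Limit             = 255;
      //
      // Placeholder apertures; take the real ones from the address map.
      // Base > Limit marks an aperture as absent.
      //
      mRootBridges[Index].Io.Base           = 0x0000;
      mRootBridges[Index].Io.Limit          = 0xFFFF;
      mRootBridges[Index].Mem.Base          = 0x80000000ULL + Index * SIZE_256MB;
      mRootBridges[Index].Mem.Limit         = mRootBridges[Index].Mem.Base + SIZE_256MB - 1;
      mRootBridges[Index].MemAbove4G.Base   = MAX_UINT64;
      mRootBridges[Index].MemAbove4G.Limit  = 0;
      mRootBridges[Index].PMem.Base         = MAX_UINT64;
      mRootBridges[Index].PMem.Limit        = 0;
      mRootBridges[Index].PMemAbove4G.Base  = MAX_UINT64;
      mRootBridges[Index].PMemAbove4G.Limit = 0;

      mRootBridges[Index].DevicePath =
        (EFI_DEVICE_PATH_PROTOCOL *)&mDevicePaths[Index];
    }

    *Count = PLATFORM_RC_COUNT;
    return mRootBridges;
  }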
* Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
2017-05-30 16:35 ` Ard Biesheuvel
@ 2017-05-30 16:49 ` Vladimir Olovyannikov
2017-05-30 17:34 ` Bill Paul
0 siblings, 1 reply; 12+ messages in thread
From: Vladimir Olovyannikov @ 2017-05-30 16:49 UTC (permalink / raw)
To: Ard Biesheuvel; +Cc: edk2-devel
> -----Original Message-----
> From: Ard Biesheuvel [mailto:ard.biesheuvel@linaro.org]
> Sent: May-30-17 9:35 AM
> To: Vladimir Olovyannikov
> Cc: edk2-devel@lists.01.org
> Subject: Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
>
> On 30 May 2017 at 16:23, Vladimir Olovyannikov
> <vladimir.olovyannikov@broadcom.com> wrote:
> > Hi,
> >
> > I've started PCIe stack implementation design for an armv8 aarch64
> > platform.
> > The platform's PCIe represents several host bridges, and each
> > hostbridge has one rootbridge.
> > They do not share any resources between each other.
> > Looking into the PciHostBridgeDxe implementation I can see that it
> > supports only one hostbridge, and there is a comment:
> > // Most systems in the world including complex servers have only one
> > Host Bridge.
> >
> > So in my case should I create my own PciHostBridgeDxe driver
> > supporting multiple hostbridges and do not use the Industry standard
> driver?
> > I am very new to it, and will appreciate any help or idea.
> >
>
> As far as I can tell, PciHostBridgeLib allows you to return an arbitrary
> number of PCI host bridges, each with their own segment number. I haven't
> tried it myself, but it is worth a try whether returning an array of all
> host bridges on your platform works as expected.
Thank you, Ard.
Right, but PciHostBridgeDxe seems to work with only one host bridge.
I am confused by this comment:
// Make sure all root bridges share the same ResourceAssigned value
The root bridges are independent on this platform and should not share
anything. Or am I missing something?
Anyway, I will try returning multiple host bridges from PciHostBridgeLib.
Thank you,
Vladimir
* Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
2017-05-30 16:49 ` Vladimir Olovyannikov
@ 2017-05-30 17:34 ` Bill Paul
2017-05-30 18:11 ` Vladimir Olovyannikov
2017-05-31 15:02 ` Johnson, Brian (EXL - Eagan)
0 siblings, 2 replies; 12+ messages in thread
From: Bill Paul @ 2017-05-30 17:34 UTC (permalink / raw)
To: edk2-devel; +Cc: Vladimir Olovyannikov, Ard Biesheuvel, edk2-devel
Of all the gin joints in all the towns in all the world, Vladimir Olovyannikov
had to walk into mine at 09:49:16 on Tuesday 30 May 2017 and say:
> > -----Original Message-----
> > From: Ard Biesheuvel [mailto:ard.biesheuvel@linaro.org]
> > Sent: May-30-17 9:35 AM
> > To: Vladimir Olovyannikov
> > Cc: edk2-devel@lists.01.org
> > Subject: Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
> >
> > On 30 May 2017 at 16:23, Vladimir Olovyannikov
> >
> > <vladimir.olovyannikov@broadcom.com> wrote:
> > > Hi,
> > >
> > > I've started PCIe stack implementation design for an armv8 aarch64
> > > platform.
> > > The platform's PCIe represents several host bridges, and each
> > > hostbridge has one rootbridge.
> > > They do not share any resources between each other.
> > > Looking into the PciHostBridgeDxe implementation I can see that it
> > > supports only one hostbridge, and there is a comment:
> > > // Most systems in the world including complex servers have only one
> > > Host Bridge.
> > >
> > > So in my case should I create my own PciHostBridgeDxe driver
> > > supporting multiple hostbridges and do not use the Industry standard
> >
> > driver?
> >
> > > I am very new to it, and will appreciate any help or idea.
> >
> > As far as I can tell, PciHostBridgeLib allows you to return an arbitrary
> > number of PCI host bridges, each with their own segment number. I haven't
> > tried it myself, but it is worth a try whether returning an array of all
> > host bridges on your platform works as expected.
>
> Thank you Ard,
> Right, but PciHostBridgeDxe seems to work with one hostbridge.
> I am confused that
>
> // Make sure all root bridges share the same ResourceAssigned value
>
> The rootbridges are independent on the platform, and should not share
> anything. Or am I missing anything?
> Anyway, I will try to return multiple hostbridges in the PciHostBridgeLib.
This may be an Intel-ism.
Note that for PCIe, I typically refer to "host bridges" as root complexes.
On PowerPC SoCs that I've worked with (e.g. Freescale/NXP MPC8548, P2020,
P4080, T4240) there are often several root complexes. A typical board design
may have several PCIe slots where each slot is connected to one root complex
in the SoC. Each root complex is therefore the parent of a separate "segment"
which has its own unique bus/dev/func space. Each root complex has its own
bank of registers to control it, including a separate set of configuration
space access registers. This means you can have multiple PCIe trees each with
its own bus0/dev0/func0 root. There can therefore be several devices with the
same bus/dev/func tuple, but which reside on separate segments.
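In edk2 terms, that segment is what the first argument of PciSegmentLib's
PCI_SEGMENT_LIB_ADDRESS() macro expresses. A small, purely hypothetical
illustration (the devices and values are made up):

  #include <Library/DebugLib.h>
  #include <Library/PciSegmentLib.h>

  VOID
  ShowSameBdfOnTwoSegments (
    VOID
    )
  {
    UINT16  VendorIdSeg0;
    UINT16  VendorIdSeg1;

    //
    // Two different devices, both at bus 0, device 0, function 0, but on
    // different segments, i.e. behind different root complexes.
    //
    VendorIdSeg0 = PciSegmentRead16 (PCI_SEGMENT_LIB_ADDRESS (0, 0, 0, 0, 0x00));
    VendorIdSeg1 = PciSegmentRead16 (PCI_SEGMENT_LIB_ADDRESS (1, 0, 0, 0, 0x00));

    //
    // The two vendor IDs may well differ even though the bus/dev/func
    // tuple is identical: the segment number disambiguates them.
    //
    DEBUG ((DEBUG_INFO, "seg0 00:00.0 VID=0x%04x, seg1 00:00.0 VID=0x%04x\n",
      VendorIdSeg0, VendorIdSeg1));
  }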
The ARMv8 board you're working with is probably set up the same way. I've only
worked with ARM Cortex-A boards and those have all had just one PCIe root
complex, but it stands to reason those that have multiple root complexes would
follow the same pattern as the PPC devices.
Intel systems can (and often do) also have multiple PCIe root complexes,
however for the sake of backwards compatibility, they all end up sharing the
same configuration space access registers (0xCF8/0xCFC or memory mapped
extended config space) and share a single unified bus/dev/func tree.
Note that the tree is not always contiguous. For example, I've seen one Intel
board where there was a special PCIe device on bus 128. In the ACPI tables,
there were two PCI "segments" described, the second of which corresponded to
bus 128. There was no switch or bridge to connect bus 128 to the tree rooted
at bus0/dev0/func0, so it would not be possible to automatically discover it
by just walking the bus0/dev0/func0 tree and all its branches: you needed to
use the ACPI hint to know it was there.
I have also seen cases like this with pre-PCIe systems. For example, I've seen
a Dell server that had both 32-bit and 64-bit PCI buses, where the 64-bit bus
was at bus 1, but was not directly bridged to bus 0 (the 32-bit bus). There
was a reason for this: 64-bit PCI buses are usually clocked at 66MHz, but
will fall back to 33MHz if you connect a 32-bit PCI device to them (this is
supported for backward compatibility). Reducing the bus clock reduces
performance, so to avoid that it's necessary to keep the 32-bit and 64-bit
buses separate and thus give each one its own host bridge. As with the
previous example, all the devices shared the same bus/dev/func space, but the
only way to learn about the other segment was to probe the ACPI tables.
It sounds as if the UEFI PCI host bridge code may be biased towards the Intel
PCI behavior, though I'm not sure to what extent.
So the comment that you found, which says:
// Most systems in the world including complex servers have only one Host Bridge.
should probably be amended: it should probably say "most Intel systems", and
even those systems probably do have more than one host bridge (root complex);
it's just that it doesn't look like it.
-Bill
> Thank you,
> Vladimir
--
=============================================================================
-Bill Paul (510) 749-2329 | Senior Member of Technical Staff,
wpaul@windriver.com | Master of Unix-Fu - Wind River Systems
=============================================================================
"I put a dollar in a change machine. Nothing changed." - George Carlin
=============================================================================
* Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
2017-05-30 17:34 ` Bill Paul
@ 2017-05-30 18:11 ` Vladimir Olovyannikov
2017-05-31 15:02 ` Johnson, Brian (EXL - Eagan)
1 sibling, 0 replies; 12+ messages in thread
From: Vladimir Olovyannikov @ 2017-05-30 18:11 UTC (permalink / raw)
To: Bill Paul, edk2-devel; +Cc: Ard Biesheuvel
> -----Original Message-----
> From: Bill Paul [mailto:wpaul@windriver.com]
> Sent: May-30-17 10:34 AM
> To: edk2-devel@ml01.01.org
> Cc: Vladimir Olovyannikov; Ard Biesheuvel; edk2-devel@ml01.01.org
> Subject: Re: [edk2] Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
>
> Of all the gin joints in all the towns in all the world, Vladimir
> Olovyannikov had to walk into mine at 09:49:16 on Tuesday 30 May 2017 and say:
>
> > > -----Original Message-----
> > > From: Ard Biesheuvel [mailto:ard.biesheuvel@linaro.org]
> > > Sent: May-30-17 9:35 AM
> > > To: Vladimir Olovyannikov
> > > Cc: edk2-devel@lists.01.org
> > > Subject: Re: Using a generic PciHostBridgeDxe driver for a
> > > multi-PCIe-domain platform
> > >
> > > On 30 May 2017 at 16:23, Vladimir Olovyannikov
> > >
> > > <vladimir.olovyannikov@broadcom.com> wrote:
> > > > Hi,
> > > >
> > > > I've started PCIe stack implementation design for an armv8 aarch64
> > > > platform.
> > > > The platform's PCIe represents several host bridges, and each
> > > > hostbridge has one rootbridge.
> > > > They do not share any resources between each other.
> > > > Looking into the PciHostBridgeDxe implementation I can see that it
> > > > supports only one hostbridge, and there is a comment:
> > > > // Most systems in the world including complex servers have only
> > > > one Host Bridge.
> > > >
> > > > So in my case should I create my own PciHostBridgeDxe driver
> > > > supporting multiple hostbridges and do not use the Industry
> > > > standard
> > >
> > > driver?
> > >
> > > > I am very new to it, and will appreciate any help or idea.
> > >
> > > As far as I can tell, PciHostBridgeLib allows you to return an
> > > arbitrary number of PCI host bridges, each with their own segment
> > > number. I haven't tried it myself, but it is worth a try whether
> > > returning an array of all host bridges on your platform works as
> > > expected.
> >
> > Thank you Ard,
> > Right, but PciHostBridgeDxe seems to work with one hostbridge.
> > I am confused that
> >
> > // Make sure all root bridges share the same ResourceAssigned value
> >
> > The rootbridges are independent on the platform, and should not share
> > anything. Or am I missing anything?
> > Anyway, I will try to return multiple hostbridges in the PciHostBridgeLib.
>
> This may be an Intel-ism.
>
> Note that for PCIe, I typically refer to "host bridges" as root complexes.
>
> On PowerPC SoCs that I've worked with (e.g. Freescale/NXP MPC8548,
> P2020, P4080, T4240) there are often several root complexes. A typical
> board design may have several PCIe slots where each slot is connected
> to one root complex in the SoC. Each root complex is therefore the
> parent of a separate "segment" which has its own unique bus/dev/func
> space. Each root complex has its own bank of registers to control it,
> including a separate set of configuration space access registers. This
> means you can have multiple PCIe trees each with its own
> bus0/dev0/func0 root. There can therefore be several devices with the
> same bus/dev/func tuple, but which reside on separate segments.
>
> The ARMv8 board you're working with is probably set up the same way.
> I've only worked with ARM Cortex-A boards and those have all had just
> one PCIe root complex, but it stands to reason those that have multiple
> root complexes would follow the same pattern as the PPC devices.
>
> Intel systems can (and often do) also have multiple PCIe root
> complexes, however for the sake of backwards compatibility, they all
> end up sharing the same configuration space access registers
> (0xCF8/0xCFC or memory mapped extended config space) and share a single
> unified bus/dev/func tree.
I see. Thanks for the comprehensive explanation.
In my case the root complexes do not share anything (there is no need for
backward compatibility).
So the question is: can I use PciHostBridgeDxe from MdeModulePkg, which
operates with "root bridges" and one root complex(?), or do I need to look
into creating my own driver for the platform (say, the way the Juno driver
was designed initially, before switching to the generic one)?
Thank you,
Vladimir
* Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
2017-05-30 16:23 Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform Vladimir Olovyannikov
2017-05-30 16:35 ` Ard Biesheuvel
@ 2017-05-30 18:32 ` Laszlo Ersek
2017-05-30 18:50 ` Laszlo Ersek
1 sibling, 1 reply; 12+ messages in thread
From: Laszlo Ersek @ 2017-05-30 18:32 UTC (permalink / raw)
To: Vladimir Olovyannikov, edk2-devel; +Cc: Ard Biesheuvel
On 05/30/17 18:23, Vladimir Olovyannikov wrote:
> Hi,
>
> I've started PCIe stack implementation design for an armv8 aarch64
> platform.
> The platform's PCIe represents several host bridges, and each hostbridge
> has one rootbridge.
> They do not share any resources between each other.
> Looking into the PciHostBridgeDxe implementation I can see that it
> supports only one hostbridge, and there is a comment:
> // Most systems in the world including complex servers have only one Host
> Bridge.
>
> So in my case should I create my own PciHostBridgeDxe driver supporting
> multiple hostbridges and do not use the Industry standard driver?
> I am very new to it, and will appreciate any help or idea.
I think you can use PciHostBridgeDxe on this platform:
- Implement a PciHostBridgeLib instance (see
<MdeModulePkg/Include/Library/PciHostBridgeLib.h>) that returns
PCI_ROOT_BRIDGE objects with different Segment fields, from
PciHostBridgeGetRootBridges().
- Implement a PciSegmentLib instance (see
<MdePkg/Include/Library/PciSegmentLib.h>) that routes the config space
addresses, encoded by the PCI_SEGMENT_LIB_ADDRESS() macro, according to
your platform.
PciHostBridgeDxe and PciBusDxe should "just work" atop. To my
understanding, PciBusDxe delegates all config space accesses to
PciHostBridgeDxe, via EFI_PCI_ROOT_BRIDGE_IO_PROTOCOL. And
PciHostBridgeDxe delegates all config space accesses to the platform's
PciSegmentLib.
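For reference, the platform .dsc wiring would then look roughly like this
(the platform library paths below are hypothetical placeholders; the
MdeModulePkg driver paths are the stock ones):

  [LibraryClasses.common.DXE_DRIVER]
    PciHostBridgeLib|YourPlatformPkg/Library/PlatformPciHostBridgeLib/PlatformPciHostBridgeLib.inf
    PciSegmentLib|YourPlatformPkg/Library/PlatformPciSegmentLib/PlatformPciSegmentLib.inf

  [Components]
    MdeModulePkg/Bus/Pci/PciHostBridgeDxe/PciHostBridgeDxe.inf
    MdeModulePkg/Bus/Pci/PciBusDxe/PciBusDxe.inf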
Thanks,
Laszlo
* Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
2017-05-30 18:32 ` Laszlo Ersek
@ 2017-05-30 18:50 ` Laszlo Ersek
2017-05-30 19:04 ` Vladimir Olovyannikov
0 siblings, 1 reply; 12+ messages in thread
From: Laszlo Ersek @ 2017-05-30 18:50 UTC (permalink / raw)
To: Vladimir Olovyannikov, edk2-devel; +Cc: Ard Biesheuvel
On 05/30/17 20:32, Laszlo Ersek wrote:
> On 05/30/17 18:23, Vladimir Olovyannikov wrote:
>> Hi,
>>
>> I've started PCIe stack implementation design for an armv8 aarch64
>> platform.
>> The platform's PCIe represents several host bridges, and each hostbridge
>> has one rootbridge.
>> They do not share any resources between each other.
>> Looking into the PciHostBridgeDxe implementation I can see that it
>> supports only one hostbridge, and there is a comment:
>> // Most systems in the world including complex servers have only one Host
>> Bridge.
>>
>> So in my case should I create my own PciHostBridgeDxe driver supporting
>> multiple hostbridges and do not use the Industry standard driver?
>> I am very new to it, and will appreciate any help or idea.
>
> I think you can use PciHostBridgeDxe on this platform:
>
> - Implement a PciHostBridgeLib instance (see
> <MdeModulePkg/Include/Library/PciHostBridgeLib.h>) that returns
> PCI_ROOT_BRIDGE objects with different Segment fields, from
> PciHostBridgeGetRootBridges().
>
> - Implement a PciSegmentLib instance (see
> <MdePkg/Include/Library/PciSegmentLib.h>) that routes the config space
> addresses, encoded by the PCI_SEGMENT_LIB_ADDRESS() macro, according to
> your platform.
>
> PciHostBridgeDxe and PciBusDxe should "just work" atop. To my
> understanding, PciBusDxe delegates all config space accesses to
> PciHostBridgeDxe, via EFI_PCI_ROOT_BRIDGE_IO_PROTOCOL. And
> PciHostBridgeDxe delegates all config space accesses to the platform's
> PciSegmentLib.
A small addition. Assuming the general case, i.e. when you have a
different number of root bridges on each of several host bridges, you
still have to number all those root bridges incrementally, in a curious,
flat address space. And that address space is the PcieRoot(N) device
path node that is supposed to start the "DevicePath" member of each
PCI_ROOT_BRIDGE object that you return from PciHostBridgeGetRootBridges().
You can read about the PcieRoot() devpath node in the UEFI 2.6 spec.
Basically, you have
ACPI_HID_DEVICE_PATH.Header = <fill in as usual>;
ACPI_HID_DEVICE_PATH.HID = EFI_PNP_ID (0x0a08);
ACPI_HID_DEVICE_PATH.UID = <fill in incrementally across host bridges>;
The UID values used in these devpath nodes should preferably match the
UID values of the corresponding PCI Express Root Bridge objects
(=PNP0A08) that you expose in your ACPI tables (DSDT and/or SSDT).
Thanks
Laszlo
* Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
2017-05-30 18:50 ` Laszlo Ersek
@ 2017-05-30 19:04 ` Vladimir Olovyannikov
2017-05-30 20:17 ` Laszlo Ersek
0 siblings, 1 reply; 12+ messages in thread
From: Vladimir Olovyannikov @ 2017-05-30 19:04 UTC (permalink / raw)
To: Laszlo Ersek, edk2-devel; +Cc: Ard Biesheuvel
> -----Original Message-----
> From: Laszlo Ersek [mailto:lersek@redhat.com]
> Sent: May-30-17 11:50 AM
> To: Vladimir Olovyannikov; edk2-devel@lists.01.org
> Cc: Ard Biesheuvel
> Subject: Re: [edk2] Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
>
> On 05/30/17 20:32, Laszlo Ersek wrote:
> > On 05/30/17 18:23, Vladimir Olovyannikov wrote:
> >> Hi,
> >>
> >> I've started PCIe stack implementation design for an armv8 aarch64
> >> platform.
> >> The platform's PCIe represents several host bridges, and each
> >> hostbridge has one rootbridge.
> >> They do not share any resources between each other.
> >> Looking into the PciHostBridgeDxe implementation I can see that it
> >> supports only one hostbridge, and there is a comment:
> >> // Most systems in the world including complex servers have only one
> >> Host Bridge.
> >>
> >> So in my case should I create my own PciHostBridgeDxe driver
> >> supporting multiple hostbridges and do not use the Industry standard
> driver?
> >> I am very new to it, and will appreciate any help or idea.
> >
> > I think you can use PciHostBridgeDxe on this platform:
> >
> > - Implement a PciHostBridgeLib instance (see
> > <MdeModulePkg/Include/Library/PciHostBridgeLib.h>) that returns
> > PCI_ROOT_BRIDGE objects with different Segment fields, from
> > PciHostBridgeGetRootBridges().
> >
> > - Implement a PciSegmentLib instance (see
> > <MdePkg/Include/Library/PciSegmentLib.h>) that routes the config space
> > addresses, encoded by the PCI_SEGMENT_LIB_ADDRESS() macro, according
> > to your platform.
> >
> > PciHostBridgeDxe and PciBusDxe should "just work" atop. To my
> > understanding, PciBusDxe delegates all config space accesses to
> > PciHostBridgeDxe, via EFI_PCI_ROOT_BRIDGE_IO_PROTOCOL. And
> > PciHostBridgeDxe delegates all config space accesses to the platform's
> > PciSegmentLib.
>
> A small addition. Assuming the general case, i.e. when you have a
> different number of root bridges on each of several host bridges, you
> still have to number all those root bridges incrementally, in a curious,
> flat address space. And that address space is the PcieRoot(N) device
> path node that is supposed to start the "DevicePath" member of each
> PCI_ROOT_BRIDGE object that you return from
> PciHostBridgeGetRootBridges().
>
> You can read about the PcieRoot() devpath node in the UEFI 2.6 spec.
> Basically, you have
>
> ACPI_HID_DEVICE_PATH.Header = <fill in as usual>;
> ACPI_HID_DEVICE_PATH.HID = EFI_PNP_ID (0x0a08);
> ACPI_HID_DEVICE_PATH.UID = <fill in incrementally across host bridges>;
>
> The UID values used in these devpath nodes should preferably match the
> UID values of the corresponding PCI Express Root Bridge objects
> (=PNP0A08) that you expose in your ACPI tables (DSDT and/or SSDT).
Thanks for the explanation, Laszlo.
In my case every host bridge has exactly one root bridge. I will follow your
advice and create a PciSegmentLib and a platform PciHostBridgeLib.
So basically my PciHostBridgeLib should treat the host bridges as root
bridges with different segments in this case?
>
> Thanks
> Laszlo
Thank you,
Vladimir
* Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
2017-05-30 19:04 ` Vladimir Olovyannikov
@ 2017-05-30 20:17 ` Laszlo Ersek
2017-05-30 20:23 ` Vladimir Olovyannikov
0 siblings, 1 reply; 12+ messages in thread
From: Laszlo Ersek @ 2017-05-30 20:17 UTC (permalink / raw)
To: Vladimir Olovyannikov, edk2-devel; +Cc: Ard Biesheuvel
On 05/30/17 21:04, Vladimir Olovyannikov wrote:
> So basically my PciHostBridgeLib should treat hostbridges as
> rootbridges with different segments in this case?
In my opinion, yes.
I would actually put it as
treat the set of sole root bridges on all the host bridges as multiple
root bridges on a single host bridge, just with different segment
numbers
The separate segment numbers should be mapped to the separate ECAM
windows in your PciSegmentLib instance, in my opinion.
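A very rough sketch of that routing, just to show the idea (the ECAM base
addresses are placeholders, and a real PciSegmentLib instance has to
implement the full set of accessors, not just the two shown):

  #include <Base.h>
  #include <Library/BaseLib.h>
  #include <Library/DebugLib.h>
  #include <Library/IoLib.h>
  #include <Library/PciSegmentLib.h>

  //
  // Placeholder ECAM window bases, one per segment / root complex.
  //
  STATIC CONST UINT64 mEcamBase[] = {
    0x4000000000ULL,   // segment 0 (placeholder)
    0x4010000000ULL,   // segment 1 (placeholder)
  };

  STATIC
  UINTN
  PciSegmentLibGetConfigAddress (
    IN UINT64  Address   // encoded by PCI_SEGMENT_LIB_ADDRESS()
    )
  {
    UINT16  Segment;
    UINT32  EcamOffset;

    //
    // PCI_SEGMENT_LIB_ADDRESS() places the segment in bits [47:32] and
    // bus/device/function/register in bits [27:0], which happens to be
    // exactly the ECAM offset layout (Bus << 20 | Dev << 15 | Func << 12
    // | Reg), so the low bits can be used as the window offset directly.
    //
    Segment    = (UINT16)RShiftU64 (Address, 32);
    EcamOffset = (UINT32)Address & 0x0FFFFFFF;

    ASSERT (Segment < ARRAY_SIZE (mEcamBase));
    return (UINTN)(mEcamBase[Segment] + EcamOffset);
  }

  UINT32
  EFIAPI
  PciSegmentRead32 (
    IN UINT64  Address
    )
  {
    return MmioRead32 (PciSegmentLibGetConfigAddress (Address));
  }

  UINT32
  EFIAPI
  PciSegmentWrite32 (
    IN UINT64  Address,
    IN UINT32  Value
    )
  {
    return MmioWrite32 (PciSegmentLibGetConfigAddress (Address), Value);
  }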
Thanks,
Laszlo
* Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
2017-05-30 20:17 ` Laszlo Ersek
@ 2017-05-30 20:23 ` Vladimir Olovyannikov
0 siblings, 0 replies; 12+ messages in thread
From: Vladimir Olovyannikov @ 2017-05-30 20:23 UTC (permalink / raw)
To: Laszlo Ersek, edk2-devel; +Cc: Ard Biesheuvel
> -----Original Message-----
> From: Laszlo Ersek [mailto:lersek@redhat.com]
> Sent: May-30-17 1:17 PM
> To: Vladimir Olovyannikov; edk2-devel@lists.01.org
> Cc: Ard Biesheuvel
> Subject: Re: [edk2] Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
>
> On 05/30/17 21:04, Vladimir Olovyannikov wrote:
>
> > So basically my PciHostBridgeLib should treat hostbridges as
> > rootbridges with different segments in this case?
>
> In my opinion, yes.
>
> I would actually put it as
>
> treat the set of sole root bridges on all the host bridges as multiple
> root bridges on a single host bridge, just with different segment
> numbers
>
> The separate segment numbers should be mapped to the separate ECAM
> windows in your PciSegmentLib instance, in my opinion.
OK, got it. Thanks a lot for your help!
>
> Thanks,
> Laszlo
* Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
2017-05-30 17:34 ` Bill Paul
2017-05-30 18:11 ` Vladimir Olovyannikov
@ 2017-05-31 15:02 ` Johnson, Brian (EXL - Eagan)
2017-05-31 18:07 ` Kinney, Michael D
1 sibling, 1 reply; 12+ messages in thread
From: Johnson, Brian (EXL - Eagan) @ 2017-05-31 15:02 UTC (permalink / raw)
To: Bill Paul, edk2-devel@ml01.01.org
Cc: edk2-devel@ml01.01.org, Vladimir Olovyannikov, Ard Biesheuvel
On Tuesday, May 30, 2017 Bill Paul wrote:
> Of all the gin joints in all the towns in all the world, Vladimir Olovyannikov had to walk into mine at 09:49:16 on Tuesday 30 May 2017 and say:
> > > -----Original Message-----
> > > From: Ard Biesheuvel [mailto:ard.biesheuvel@linaro.org]
> > > Sent: May-30-17 9:35 AM
> > > To: Vladimir Olovyannikov
> > > Cc: edk2-devel@lists.01.org
> > > Subject: Re: Using a generic PciHostBridgeDxe driver for a
> > > multi-PCIe-domain platform
> > >
> > > On 30 May 2017 at 16:23, Vladimir Olovyannikov
> > >
> > > <vladimir.olovyannikov@broadcom.com> wrote:
> > > > Hi,
> > > >
> > > > I've started PCIe stack implementation design for an armv8 aarch64
> > > > platform.
> > > > The platform's PCIe represents several host bridges, and each
> > > > hostbridge has one rootbridge.
> > > > They do not share any resources between each other.
> > > > Looking into the PciHostBridgeDxe implementation I can see that it
> > > > supports only one hostbridge, and there is a comment:
> > > > // Most systems in the world including complex servers have only
> > > > one Host Bridge.
> > > >
> > > > So in my case should I create my own PciHostBridgeDxe driver
> > > > supporting multiple hostbridges and do not use the Industry
> > > > standard
> > >
> > > driver?
> > >
> > > > I am very new to it, and will appreciate any help or idea.
> > >
> > > As far as I can tell, PciHostBridgeLib allows you to return an
> > > arbitrary number of PCI host bridges, each with their own segment
> > > number. I haven't tried it myself, but it is worth a try whether
> > > returning an array of all host bridges on your platform works as
> > > expected.
> >
> > Thank you Ard,
> > Right, but PciHostBridgeDxe seems to work with one hostbridge.
> > I am confused that
> >
> > // Make sure all root bridges share the same ResourceAssigned value
> >
> > The rootbridges are independent on the platform, and should not share
> > anything. Or am I missing anything?
> > Anyway, I will try to return multiple hostbridges in the PciHostBridgeLib.
>
> This may be an Intel-ism.
>
> Note that for PCIe, I typically refer to "host bridges" as root
> complexes.
>
> On PowerPC SoCs that I've worked with (e.g. Freescale/NXP MPC8548,
> P2020, P4080, T4240) there are often several root complexes. A
> typical board design may have several PCIe slots where each slot is
> connected to one root complex in the SoC. Each root complex is
> therefore the parent of a separate "segment" which has its own
> unique bus/dev/func space. Each root complex has its own bank of
> registers to control it, including a separate set of configuration
> space access registers. This means you can have multiple PCIe trees
> each with its own bus0/dev0/func0 root. There can therefore be
> several devices with the same bus/dev/func tuple, but which reside
> on separate segments.
>
> The ARMv8 board you're working with is probably set up the same
> way. I've only worked with ARM Cortex-A boards and those have all
> had just one PCIe root complex, but it stands to reason those that
> have multiple root complexes would follow the same pattern as the
> PPC devices.
>
> Intel systems can (and often do) also have multiple PCIe root
> complexes, however for the sake of backwards compatibility, they all
> end up sharing the same configuration space access registers
> (0xCF8/0xCFC or memory mapped extended config space) and share a
> single unified bus/dev/func tree.
>
> Note that the tree is not always contiguous. For example, I've seen
> one Intel board where there was a special PCIe device on bus 128. In
> the ACPI tables, there were two PCI "segments" described, the second
> of which corresponded to bus 128. There was no switch or bridge to
> connect bus 128 to the tree rooted at bus0/dev0/func0, so it would
> not be possible to automatically discover it by just walking the
> bus0/dev0/func0 tree and all its branches: you needed to use the
> ACPI hint to know it was there.
>
> I have also seen cases like this with pre-PCIe systems. For example,
> I've seen a Dell server that had both 32-bit and 64-bit PCI buses,
> where the 64-bit bus was at bus 1, but was not directly bridged to
> bus 0 (the 32-bit bus). There was a reason for this: 64-bit PCIe
> buses are usually clocked at 66MHz, but will fall back to 33MHz if
> you connect a 32-bit PCI device to them (this is supported for
> backward compatibility). Reducing the bus clock reduces performance,
> so to avoid that it's necessary to keep the 32-bit and 64-bit buses
> separate and thus give each one its own host bridge. As with the
> previous example, all the devices shared the same bus/dev/func
> space, but the only way to learn about the other segment was to
> probe the ACPI tables.
>
> It sounds as if the UEFI PCI host bridge code may be biased towards
> the Intel PCI behavior, though I'm not sure to what extent.
>
> So the comment that you found that says:
>
> // Most systems in the world including complex servers have only one Host Bridge.
>
> Should probably be amended: it should probably say "Most Intel
> systems" and even those systems probably do have more than one host
> bridge (root complex), it's just that it doesn't look like it.
>
> -Bill
>
FWIW, SGI (now HPE) scalable x86 systems typically implement at least
one PCIe segment per socket. There can be dozens (even hundreds) of
sockets, so there are many root bridges. Only segment zero is
accessible via the port 0xcf8/0xcfc mechanism. The others are
memory-mapped. We have implemented our own PciHostBridge module to
manage them. It uses a single host bridge instance with many root
bridges linked under it.
Brian J. Johnson
* Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
2017-05-31 15:02 ` Johnson, Brian (EXL - Eagan)
@ 2017-05-31 18:07 ` Kinney, Michael D
0 siblings, 0 replies; 12+ messages in thread
From: Kinney, Michael D @ 2017-05-31 18:07 UTC (permalink / raw)
To: Johnson, Brian (EXL - Eagan), Bill Paul, edk2-devel@ml01.01.org,
Kinney, Michael D
Cc: edk2-devel@ml01.01.org, Vladimir Olovyannikov, Ard Biesheuvel
There may also be some PCI terminology differences.
EDK II implements the PI Specifications. The latest PI version is here:
http://www.uefi.org/sites/default/files/resources/PI%201.5.zip
Volume 5 - Chapter 10 PCI Host Bridge provides the terminology that is used
for the EDK II implementation.
Specifically, Section 10.4 provides a discussion of the following example
PCI Architectures:
* Desktop system with 1 PCI root bridge
* Server system with 4 PCI root bridges
* Server system with 2 PCI segments
* Server system with 2 PCI host buses
One additional element I have noticed is the difference between a HW
description of the platform's PCI subsystem and the SW view of the
platform's PCI subsystem.
Mike
Thread overview: 12+ messages
2017-05-30 16:23 Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform Vladimir Olovyannikov
2017-05-30 16:35 ` Ard Biesheuvel
2017-05-30 16:49 ` Vladimir Olovyannikov
2017-05-30 17:34 ` Bill Paul
2017-05-30 18:11 ` Vladimir Olovyannikov
2017-05-31 15:02 ` Johnson, Brian (EXL - Eagan)
2017-05-31 18:07 ` Kinney, Michael D
2017-05-30 18:32 ` Laszlo Ersek
2017-05-30 18:50 ` Laszlo Ersek
2017-05-30 19:04 ` Vladimir Olovyannikov
2017-05-30 20:17 ` Laszlo Ersek
2017-05-30 20:23 ` Vladimir Olovyannikov