public inbox for devel@edk2.groups.io
From: Vladimir Olovyannikov <vladimir.olovyannikov@broadcom.com>
To: Bill Paul <wpaul@windriver.com>, edk2-devel@ml01.01.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
Date: Tue, 30 May 2017 11:11:44 -0700	[thread overview]
Message-ID: <c5d48dfcb30f102cb56164dad6051718@mail.gmail.com> (raw)
In-Reply-To: <201705301034.28519.wpaul@windriver.com>

> -----Original Message-----
> From: Bill Paul [mailto:wpaul@windriver.com]
> Sent: May-30-17 10:34 AM
> To: edk2-devel@ml01.01.org
> Cc: Vladimir Olovyannikov; Ard Biesheuvel; edk2-devel@ml01.01.org
> Subject: Re: [edk2] Using a generic PciHostBridgeDxe driver for a
> multi-PCIe-domain platform
>
> Of all the gin joints in all the towns in all the world, Vladimir
> Olovyannikov had to walk into mine at 09:49:16 on Tuesday 30 May 2017
> and say:
>
> > > -----Original Message-----
> > > From: Ard Biesheuvel [mailto:ard.biesheuvel@linaro.org]
> > > Sent: May-30-17 9:35 AM
> > > To: Vladimir Olovyannikov
> > > Cc: edk2-devel@lists.01.org
> > > Subject: Re: Using a generic PciHostBridgeDxe driver for a
> > > multi-PCIe-domain platform
> > >
> > > On 30 May 2017 at 16:23, Vladimir Olovyannikov
> > >
> > > <vladimir.olovyannikov@broadcom.com> wrote:
> > > > Hi,
> > > >
> > > > I've started the PCIe stack implementation design for an ARMv8
> > > > AArch64 platform.
> > > > The platform's PCIe topology comprises several host bridges, and
> > > > each host bridge has one root bridge.
> > > > They do not share any resources with each other.
> > > > Looking into the PciHostBridgeDxe implementation, I can see that
> > > > it supports only one host bridge, and there is a comment:
> > > > // Most systems in the world including complex servers have only
> > > > one Host Bridge.
> > > >
> > > > So in my case, should I create my own PciHostBridgeDxe driver
> > > > supporting multiple host bridges and not use the industry-standard
> > > > driver?
> > > >
> > > > I am very new to this, and would appreciate any help or ideas.
> > >
> > > As far as I can tell, PciHostBridgeLib allows you to return an
> > > arbitrary number of PCI host bridges, each with its own segment
> > > number. I haven't tried it myself, but it is worth checking whether
> > > returning an array of all the host bridges on your platform works
> > > as expected.
> >
> > Thank you, Ard.
> > Right, but PciHostBridgeDxe seems to work with only one host bridge.
> > I am confused by this comment:
> >
> > // Make sure all root bridges share the same ResourceAssigned value
> >
> > The root bridges are independent on this platform, and should not
> > share anything. Or am I missing something?
> > Anyway, I will try returning multiple host bridges from
> > PciHostBridgeLib.
>
> This may be an Intel-ism.
>
> Note that for PCIe, I typically refer to "host bridges" as root
> complexes.
>
> On PowerPC SoCs that I've worked with (e.g. Freescale/NXP MPC8548,
> P2020, P4080, T4240) there are often several root complexes. A typical
> board design may have several PCIe slots where each slot is connected
> to one root complex in the SoC. Each root complex is therefore the
> parent of a separate "segment" which has its own unique bus/dev/func
> space. Each root complex has its own bank of registers to control it,
> including a separate set of configuration space access registers. This
> means you can have multiple PCIe trees, each with its own
> bus0/dev0/func0 root. There can therefore be several devices with the
> same bus/dev/func tuple, but which reside on separate segments.
>
> The ARMv8 board you're working with is probably set up the same way.
> I've only worked with ARM Cortex-A boards and those have all had just
> one PCIe root complex, but it stands to reason those that have
> multiple root complexes would follow the same pattern as the PPC
> devices.
>
> Intel systems can (and often do) also have multiple PCIe root
> complexes; however, for the sake of backwards compatibility, they all
> end up sharing the same configuration space access registers
> (0xCF8/0xCFC or memory-mapped extended config space) and share a
> single unified bus/dev/func tree.
I see. Thanks for the comprehensive explanation.
In my case the root complexes do not share anything (there is no need
for backward compatibility).
So the question is: can I use PciHostBridgeDxe from MdeModulePkg, which
operates with "root bridges" and a single root complex(?), or do I need
to look into creating my own driver for the platform (say, the way the
Juno driver was designed initially, before it switched to the generic
one)?
>
> Note that the tree is not always contiguous. For example, I've seen
> one Intel board where there was a special PCIe device on bus 128. In
> the ACPI tables, there were two PCI "segments" described, the second
> of which corresponded to bus 128. There was no switch or bridge to
> connect bus 128 to the tree rooted at bus0/dev0/func0, so it would not
> be possible to automatically discover it by just walking the
> bus0/dev0/func0 tree and all its branches: you needed to use the ACPI
> hint to know it was there.
>
> I have also seen cases like this with pre-PCIe systems. For example,
> I've seen a Dell server that had both 32-bit and 64-bit PCI buses,
> where the 64-bit bus was at bus 1, but was not directly bridged to
> bus 0 (the 32-bit bus). There was a reason for this: 64-bit PCI buses
> are usually clocked at 66MHz, but will fall back to 33MHz if you
> connect a 32-bit PCI device to them (this is supported for backward
> compatibility). Reducing the bus clock reduces performance, so to
> avoid that it's necessary to keep the 32-bit and 64-bit buses separate
> and thus give each one its own host bridge. As with the previous
> example, all the devices shared the same bus/dev/func space, but the
> only way to learn about the other segment was to probe the ACPI
> tables.
>
> It sounds as if the UEFI PCI host bridge code may be biased towards
> the Intel PCI behavior, though I'm not sure to what extent.
>
> So the comment that you found, which says:
>
> // Most systems in the world including complex servers have only one
> Host Bridge.
>
> should probably be amended to say "Most Intel systems". And even those
> systems probably do have more than one host bridge (root complex);
> it's just that it doesn't look like it.
>
> -Bill
>
> > Thank you,
> > Vladimir
>
> --
> =============================================================================
> -Bill Paul            (510) 749-2329 | Senior Member of Technical Staff,
>                  wpaul@windriver.com | Master of Unix-Fu - Wind River Systems
> =============================================================================
>    "I put a dollar in a change machine. Nothing changed." - George Carlin
> =============================================================================
Thank you,
Vladimir


Thread overview: 12+ messages
2017-05-30 16:23 Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform Vladimir Olovyannikov
2017-05-30 16:35 ` Ard Biesheuvel
2017-05-30 16:49   ` Vladimir Olovyannikov
2017-05-30 17:34     ` Bill Paul
2017-05-30 18:11       ` Vladimir Olovyannikov [this message]
2017-05-31 15:02       ` Johnson, Brian (EXL - Eagan)
2017-05-31 18:07         ` Kinney, Michael D
2017-05-30 18:32 ` Laszlo Ersek
2017-05-30 18:50   ` Laszlo Ersek
2017-05-30 19:04     ` Vladimir Olovyannikov
2017-05-30 20:17       ` Laszlo Ersek
2017-05-30 20:23         ` Vladimir Olovyannikov
