From: "Kinney, Michael D"
To: "Johnson, Brian (EXL - Eagan)"; Bill Paul; edk2-devel@ml01.01.org; "Kinney, Michael D"
Cc: edk2-devel@ml01.01.org; Vladimir Olovyannikov; Ard Biesheuvel
Date: Wed, 31 May 2017 18:07:07 +0000
Subject: Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform

There may also be some PCI terminology differences. The EDK II implements the PI Specifications. Latest PI version:

http://www.uefi.org/sites/default/files/resources/PI%201.5.zip

Volume 5 - Chapter 10 PCI Host Bridge provides the terminology that is used for the EDK II implementation. Specifically, Section 10.4 provides a discussion of the following example PCI Architectures:

* Desktop system with 1 PCI root bridge
* Server system with 4 PCI root bridges
* Server system with 2 PCI segments
* Server system with 2 PCI host buses

One additional element I have noticed is the difference between a HW description of the platform's PCI subsystem and the SW view of the platform's PCI subsystem.

Mike
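[The SW view mentioned above is what eventually surfaces as one EFI_PCI_ROOT_BRIDGE_IO_PROTOCOL instance per root bridge. A minimal sketch, assuming only the standard MdePkg protocol and library headers, of walking those instances to see how many root bridges and segments the firmware actually reports; the function name DumpRootBridges is invented for the example.]

#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Library/DebugLib.h>
#include <Protocol/PciRootBridgeIo.h>

//
// Walk every EFI_PCI_ROOT_BRIDGE_IO_PROTOCOL instance and report the
// segment number and host bridge handle it belongs to -- i.e. the SW
// view of the platform's PCI subsystem.  Sketch only; error handling
// is minimal.
//
EFI_STATUS
DumpRootBridges (
  VOID
  )
{
  EFI_STATUS                       Status;
  EFI_HANDLE                       *Handles;
  UINTN                            HandleCount;
  UINTN                            Index;
  EFI_PCI_ROOT_BRIDGE_IO_PROTOCOL  *RootBridgeIo;

  Status = gBS->LocateHandleBuffer (
                  ByProtocol,
                  &gEfiPciRootBridgeIoProtocolGuid,
                  NULL,
                  &HandleCount,
                  &Handles
                  );
  if (EFI_ERROR (Status)) {
    return Status;
  }

  for (Index = 0; Index < HandleCount; Index++) {
    Status = gBS->HandleProtocol (
                    Handles[Index],
                    &gEfiPciRootBridgeIoProtocolGuid,
                    (VOID **)&RootBridgeIo
                    );
    if (EFI_ERROR (Status)) {
      continue;
    }

    //
    // SegmentNumber identifies the PCI segment (domain) this root bridge
    // decodes; ParentHandle identifies the host bridge it hangs under.
    //
    DEBUG ((DEBUG_INFO, "Root bridge %u: segment %u, host bridge handle %p\n",
      (UINT32)Index, RootBridgeIo->SegmentNumber, RootBridgeIo->ParentHandle));
  }

  gBS->FreePool (Handles);
  return EFI_SUCCESS;
}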
-----Original Message-----
From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org] On Behalf Of Johnson, Brian (EXL - Eagan)
Sent: Wednesday, May 31, 2017 8:02 AM
To: Bill Paul; edk2-devel@ml01.01.org
Cc: edk2-devel@ml01.01.org; Vladimir Olovyannikov; Ard Biesheuvel
Subject: Re: [edk2] Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform

-----Original Message-----
From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org] On Behalf Of Bill Paul
Sent: Tuesday, May 30, 2017 12:34 PM
To: edk2-devel@ml01.01.org
Cc: edk2-devel@ml01.01.org; Vladimir Olovyannikov; Ard Biesheuvel
Subject: Re: [edk2] Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform

Of all the gin joints in all the towns in all the world, Vladimir Olovyannikov had to walk into mine at 09:49:16 on Tuesday 30 May 2017 and say:

> > -----Original Message-----
> > From: Ard Biesheuvel [mailto:ard.biesheuvel@linaro.org]
> > Sent: May-30-17 9:35 AM
> > To: Vladimir Olovyannikov
> > Cc: edk2-devel@lists.01.org
> > Subject: Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
> >
> > On 30 May 2017 at 16:23, Vladimir Olovyannikov wrote:
> > > Hi,
> > >
> > > I've started PCIe stack implementation design for an armv8 aarch64 platform.
> > > The platform's PCIe represents several host bridges, and each hostbridge has one rootbridge.
> > > They do not share any resources between each other.
> > > Looking into the PciHostBridgeDxe implementation I can see that it supports only one hostbridge, and there is a comment:
> > > // Most systems in the world including complex servers have only one Host Bridge.
> > >
> > > So in my case should I create my own PciHostBridgeDxe driver supporting multiple hostbridges and do not use the Industry standard driver?
> > > I am very new to it, and will appreciate any help or idea.
> >
> > As far as I can tell, PciHostBridgeLib allows you to return an arbitrary number of PCI host bridges, each with their own segment number. I haven't tried it myself, but it is worth a try whether returning an array of all host bridges on your platform works as expected.
>
> Thank you Ard,
> Right, but PciHostBridgeDxe seems to work with one hostbridge.
> I am confused that
>
> // Make sure all root bridges share the same ResourceAssigned value
>
> The rootbridges are independent on the platform, and should not share anything. Or am I missing anything?
> Anyway, I will try to return multiple hostbridges in the PciHostBridgeLib.

This may be an Intel-ism.

Note that for PCIe, I typically refer to "host bridges" as root complexes.

On PowerPC SoCs that I've worked with (e.g. Freescale/NXP MPC8548, P2020, P4080, T4240) there are often several root complexes. A typical board design may have several PCIe slots where each slot is connected to one root complex in the SoC. Each root complex is therefore the parent of a separate "segment" which has its own unique bus/dev/func space. Each root complex has its own bank of registers to control it, including a separate set of configuration space access registers. This means you can have multiple PCIe trees, each with its own bus0/dev0/func0 root. There can therefore be several devices with the same bus/dev/func tuple, but which reside on separate segments.
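[To make "a separate set of configuration space access registers" concrete: on SoCs like these, each root complex usually decodes its own ECAM (memory-mapped config) window, so the segment number effectively selects which window a given bus/dev/func tuple is translated through. A minimal sketch; the two window base addresses are invented example values, not taken from any particular SoC.]

#include <Uefi.h>

//
// Segment-aware ECAM config space address calculation.  Each root complex
// (segment) has its own memory-mapped config window, so identical
// bus/dev/func tuples on different segments resolve to different addresses.
//
#define EXAMPLE_SEGMENT_COUNT  2

STATIC CONST UINT64 mEcamBase[EXAMPLE_SEGMENT_COUNT] = {
  0x40000000ULL,   // hypothetical window for root complex / segment 0
  0x48000000ULL    // hypothetical window for root complex / segment 1
};

UINT64
PcieConfigAddress (
  IN UINTN  Segment,
  IN UINTN  Bus,
  IN UINTN  Device,
  IN UINTN  Function,
  IN UINTN  Offset
  )
{
  //
  // Standard ECAM layout inside a window:
  //   Bus[27:20] | Device[19:15] | Function[14:12] | Offset[11:0]
  //
  return mEcamBase[Segment] +
         (((UINT64)Bus      << 20) |
          ((UINT64)Device   << 15) |
          ((UINT64)Function << 12) |
          (Offset & 0xFFF));
}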
The ARMv8 board you're working with is probably set up the same way. I've only worked with ARM Cortex-A boards and those have all had just one PCIe root complex, but it stands to reason those that have multiple root complexes would follow the same pattern as the PPC devices.

Intel systems can (and often do) also have multiple PCIe root complexes, however for the sake of backwards compatibility, they all end up sharing the same configuration space access registers (0xCF8/0xCFC or memory mapped extended config space) and share a single unified bus/dev/func tree.

Note that the tree is not always contiguous. For example, I've seen one Intel board where there was a special PCIe device on bus 128. In the ACPI tables, there were two PCI "segments" described, the second of which corresponded to bus 128. There was no switch or bridge to connect bus 128 to the tree rooted at bus0/dev0/func0, so it would not be possible to automatically discover it by just walking the bus0/dev0/func0 tree and all its branches: you needed to use the ACPI hint to know it was there.

I have also seen cases like this with pre-PCIe systems. For example, I've seen a Dell server that had both 32-bit and 64-bit PCI buses, where the 64-bit bus was at bus 1, but was not directly bridged to bus 0 (the 32-bit bus). There was a reason for this: 64-bit PCI buses are usually clocked at 66MHz, but will fall back to 33MHz if you connect a 32-bit PCI device to them (this is supported for backward compatibility). Reducing the bus clock reduces performance, so to avoid that it's necessary to keep the 32-bit and 64-bit buses separate and thus give each one its own host bridge. As with the previous example, all the devices shared the same bus/dev/func space, but the only way to learn about the other segment was to probe the ACPI tables.

It sounds as if the UEFI PCI host bridge code may be biased towards the Intel PCI behavior, though I'm not sure to what extent.

So the comment that you found that says:

// Most systems in the world including complex servers have only one Host Bridge.

should probably be amended: it should probably say "Most Intel systems", and even those systems probably do have more than one host bridge (root complex); it's just that it doesn't look like it.

-Bill

> Thank you,
> Vladimir
> _______________________________________________
> edk2-devel mailing list
> edk2-devel@lists.01.org
> https://lists.01.org/mailman/listinfo/edk2-devel

--
=============================================================================
-Bill Paul    (510) 749-2329 | Senior Member of Technical Staff,
                 wpaul@windriver.com | Master of Unix-Fu - Wind River Systems
=============================================================================
   "I put a dollar in a change machine. Nothing changed."
                                           - George Carlin
=============================================================================
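[The "ACPI hint" in the bus-128 example above is typically the MCFG table, whose entries describe each ECAM window by base address, segment group and bus range; that is how software learns about config space it cannot reach by walking bridges down from bus 0. A sketch of the entry layout with invented values, showing one plausible way a disjoint bus-128 tree could be exposed as a second allocation; a real board might instead keep both trees in segment 0 with disjoint bus ranges.]

#include <Uefi.h>

#pragma pack(1)
//
// Layout of one MCFG "configuration space base address allocation"
// entry (PCI Firmware Specification / ACPI).  The type name here is
// local to this sketch.
//
typedef struct {
  UINT64    BaseAddress;            // ECAM base address for this range
  UINT16    PciSegmentGroupNumber;  // matches _SEG on the host bridge
  UINT8     StartBusNumber;         // first bus number decoded
  UINT8     EndBusNumber;           // last bus number decoded
  UINT32    Reserved;
} MCFG_ALLOCATION;
#pragma pack()

//
// Invented example: the ordinary tree rooted at bus 0, plus a second
// allocation that makes the otherwise undiscoverable tree at bus 128
// visible.  Base addresses and the segment split are placeholders.
//
STATIC CONST MCFG_ALLOCATION  mExampleAllocations[] = {
  { 0xE0000000ULL, 0, 0x00, 0x7F, 0 },
  { 0xE8000000ULL, 1, 0x80, 0xFF, 0 }
};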
On Tuesday, May 30, 2017 Bill Paul wrote:
> It sounds as if the UEFI PCI host bridge code may be biased towards
> the Intel PCI behavior, though I'm not sure to what extent.
>
> So the comment that you found that says:
>
> // Most systems in the world including complex servers have only one Host Bridge.
>
> Should probably be amended: it should probably say "Most Intel
> systems" and even those systems probably do have more than one host
> bridge (root complex), it's just that it doesn't look like it.
>
> -Bill

FWIW, SGI (now HPE) scalable x86 systems typically implement at least one PCIe segment per socket. There can be dozens (even hundreds) of sockets, so there are many root bridges. Only segment zero is accessible via the port 0xcf8/0xcfc mechanism. The others are memory-mapped. We have implemented our own PciHostBridge module to manage them. It uses a single host bridge instance with many root bridges linked under it.

Brian J. Johnson
_______________________________________________
edk2-devel mailing list
edk2-devel@lists.01.org
https://lists.01.org/mailman/listinfo/edk2-devel
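[Tying the thread together: with the generic MdeModulePkg PciHostBridgeDxe driver, "a single host bridge instance with many root bridges linked under it" is expressed entirely in the platform's PciHostBridgeLib instance. A simplified sketch of returning one root bridge per root complex, each on its own segment, assuming the PciHostBridgeLib interface declared in MdeModulePkg; the apertures, segment numbering and NULL device path are placeholders a real library must replace, and PciHostBridgeResourceConflict() is omitted.]

#include <Uefi.h>
#include <Library/PciHostBridgeLib.h>
#include <Library/MemoryAllocationLib.h>

#define EXAMPLE_ROOT_BRIDGE_COUNT  2   // hypothetical: one per root complex

//
// Return one PCI_ROOT_BRIDGE per root complex.  Each entry carries its own
// Segment number, so the root bridges share no bus numbers and no MMIO/IO
// apertures.  ResourceAssigned is FALSE for every entry; the generic driver
// expects the value to be consistent across all of them (the "Make sure all
// root bridges share the same ResourceAssigned value" check quoted earlier
// in the thread).
//
PCI_ROOT_BRIDGE *
EFIAPI
PciHostBridgeGetRootBridges (
  UINTN  *Count
  )
{
  PCI_ROOT_BRIDGE  *Bridges;
  UINTN            Index;

  Bridges = AllocateZeroPool (EXAMPLE_ROOT_BRIDGE_COUNT * sizeof (PCI_ROOT_BRIDGE));
  if (Bridges == NULL) {
    *Count = 0;
    return NULL;
  }

  for (Index = 0; Index < EXAMPLE_ROOT_BRIDGE_COUNT; Index++) {
    Bridges[Index].Segment               = (UINT32)Index;  // one segment per root complex
    Bridges[Index].Supports              = 0;
    Bridges[Index].Attributes            = 0;
    Bridges[Index].DmaAbove4G            = TRUE;            // platform dependent
    Bridges[Index].NoExtendedConfigSpace = FALSE;
    Bridges[Index].ResourceAssigned      = FALSE;
    Bridges[Index].AllocationAttributes  = 0;

    //
    // Placeholder apertures: each root bridge gets the full bus range of its
    // own segment and a disjoint 256 MB MMIO window; I/O, >4 GB MMIO and
    // prefetchable ranges are marked "none" (Base > Limit).  Real values are
    // platform specific.
    //
    Bridges[Index].Bus.Base          = 0;
    Bridges[Index].Bus.Limit         = 255;
    Bridges[Index].Io.Base           = MAX_UINT64;
    Bridges[Index].Io.Limit          = 0;
    Bridges[Index].Mem.Base          = 0x40000000ULL + (UINT64)Index * SIZE_256MB;
    Bridges[Index].Mem.Limit         = Bridges[Index].Mem.Base + SIZE_256MB - 1;
    Bridges[Index].MemAbove4G.Base   = MAX_UINT64;
    Bridges[Index].MemAbove4G.Limit  = 0;
    Bridges[Index].PMem.Base         = MAX_UINT64;
    Bridges[Index].PMem.Limit        = 0;
    Bridges[Index].PMemAbove4G.Base  = MAX_UINT64;
    Bridges[Index].PMemAbove4G.Limit = 0;

    //
    // A real implementation must provide a unique device path per root
    // bridge (e.g. an ACPI HID/UID node); left NULL in this sketch.
    //
    Bridges[Index].DevicePath = NULL;
  }

  *Count = EXAMPLE_ROOT_BRIDGE_COUNT;
  return Bridges;
}

VOID
EFIAPI
PciHostBridgeFreeRootBridges (
  PCI_ROOT_BRIDGE  *Bridges,
  UINTN            Count
  )
{
  FreePool (Bridges);
}

[Leaving ResourceAssigned FALSE for every entry lets the host bridge and PCI bus drivers do the resource assignment; a platform whose root bridges were already programmed by earlier boot firmware would presumably set it TRUE for all of them instead, which keeps the values consistent either way.]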