From: Vladimir Olovyannikov
Date: Tue, 30 May 2017 11:11:44 -0700
To: Bill Paul, edk2-devel@ml01.01.org
Cc: Ard Biesheuvel
Subject: Re: Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform
In-Reply-To: <201705301034.28519.wpaul@windriver.com>

> -----Original Message-----
> From: Bill Paul [mailto:wpaul@windriver.com]
> Sent: May-30-17 10:34 AM
> To: edk2-devel@ml01.01.org
> Cc: Vladimir Olovyannikov; Ard Biesheuvel; edk2-devel@ml01.01.org
> Subject: Re: [edk2] Using a generic PciHostBridgeDxe driver for a
> multi-PCIe-domain platform
>
> Of all the gin joints in all the towns in all the world, Vladimir
> Olovyannikov had to walk into mine at 09:49:16 on Tuesday 30 May 2017
> and say:
>
> > > -----Original Message-----
> > > From: Ard Biesheuvel [mailto:ard.biesheuvel@linaro.org]
> > > Sent: May-30-17 9:35 AM
> > > To: Vladimir Olovyannikov
> > > Cc: edk2-devel@lists.01.org
> > > Subject: Re: Using a generic PciHostBridgeDxe driver for a
> > > multi-PCIe-domain platform
> > >
> > > On 30 May 2017 at 16:23, Vladimir Olovyannikov wrote:
> > > > Hi,
> > > >
> > > > I've started the PCIe stack implementation design for an ARMv8
> > > > AArch64 platform. The platform's PCIe comprises several host
> > > > bridges, and each host bridge has one root bridge. They do not
> > > > share any resources with each other. Looking into the
> > > > PciHostBridgeDxe implementation, I can see that it supports only
> > > > one host bridge, and there is a comment:
> > > >
> > > > // Most systems in the world including complex servers have only
> > > > one Host Bridge.
> > > >
> > > > So in my case, should I create my own PciHostBridgeDxe driver
> > > > supporting multiple host bridges, and not use the industry
> > > > standard driver? I am very new to this, and will appreciate any
> > > > help or ideas.
> > >
> > > As far as I can tell, PciHostBridgeLib allows you to return an
> > > arbitrary number of PCI host bridges, each with their own segment
> > > number. I haven't tried it myself, but it is worth trying whether
> > > returning an array of all host bridges on your platform works as
> > > expected.
> >
> > Thank you Ard,
> > Right, but PciHostBridgeDxe seems to work with only one host bridge.
> > What confuses me is this comment:
> >
> > // Make sure all root bridges share the same ResourceAssigned value
> >
> > The root bridges are independent on this platform, and should not
> > share anything. Or am I missing something? Anyway, I will try to
> > return multiple host bridges from PciHostBridgeLib.
>
> This may be an Intel-ism.
>
> Note that for PCIe, I typically refer to "host bridges" as root
> complexes.
>
> On PowerPC SoCs that I've worked with (e.g. Freescale/NXP MPC8548,
> P2020, P4080, T4240) there are often several root complexes. A typical
> board design may have several PCIe slots where each slot is connected
> to one root complex in the SoC. Each root complex is therefore the
> parent of a separate "segment" which has its own unique bus/dev/func
> space. Each root complex has its own bank of registers to control it,
> including a separate set of configuration space access registers. This
> means you can have multiple PCIe trees, each with its own
> bus0/dev0/func0 root. There can therefore be several devices with the
> same bus/dev/func tuple, but which reside on separate segments.
>
> The ARMv8 board you're working with is probably set up the same way.
> I've only worked with ARM Cortex-A boards, and those have all had just
> one PCIe root complex, but it stands to reason that those with
> multiple root complexes would follow the same pattern as the PPC
> devices.
>
> Intel systems can (and often do) also have multiple PCIe root
> complexes; however, for the sake of backwards compatibility, they all
> end up sharing the same configuration space access registers
> (0xCF8/0xCFC or memory-mapped extended config space) and share a
> single unified bus/dev/func tree.

I see. Thanks for the comprehensive explanation.
In my case the root complexes do not share anything (there is no need
for backward compatibility). So the question is: can I use
PciHostBridgeDxe from MdeModulePkg, which operates with "root bridges"
and one root complex(?), or do I need to look into creating my own
driver for the platform (say, the way the Juno driver was designed
initially, before it switched to the generic one)?

> Note that the tree is not always contiguous. For example, I've seen
> one Intel board where there was a special PCIe device on bus 128. In
> the ACPI tables, there were two PCI "segments" described, the second
> of which corresponded to bus 128. There was no switch or bridge to
> connect bus 128 to the tree rooted at bus0/dev0/func0, so it would not
> be possible to discover it automatically by just walking the
> bus0/dev0/func0 tree and all its branches: you needed to use the ACPI
> hint to know it was there.
>
> I have also seen cases like this with pre-PCIe systems. For example,
> I've seen a Dell server that had both 32-bit and 64-bit PCI buses,
> where the 64-bit bus was at bus 1, but was not directly bridged to
> bus 0 (the 32-bit bus). There was a reason for this: 64-bit PCI buses
> are usually clocked at 66MHz, but will fall back to 33MHz if you
> connect a 32-bit PCI device to them (this is supported for backward
> compatibility).
> Reducing the bus clock reduces performance, so to avoid that it's
> necessary to keep the 32-bit and 64-bit buses separate and thus give
> each one its own host bridge. As with the previous example, all the
> devices shared the same bus/dev/func space, but the only way to learn
> about the other segment was to probe the ACPI tables.
>
> It sounds as if the UEFI PCI host bridge code may be biased towards
> the Intel PCI behavior, though I'm not sure to what extent.
>
> So the comment that you found, which says:
>
> // Most systems in the world including complex servers have only one
> // Host Bridge.
>
> should probably be amended: it should probably say "Most Intel
> systems", and even those systems probably do have more than one host
> bridge (root complex); it's just that it doesn't look like it.
>
> -Bill
>
> > Thank you,
> > Vladimir
>
> --
> =============================================================================
> -Bill Paul            (510) 749-2329 | Senior Member of Technical Staff,
>                  wpaul@windriver.com | Master of Unix-Fu - Wind River Systems
> =============================================================================
>    "I put a dollar in a change machine. Nothing changed." - George Carlin
> =============================================================================

Thank you,
Vladimir