[PATCH libpciaccess] Support for 32-bit domains

Mark Kettenis mark.kettenis at xs4all.nl
Thu Aug 11 09:16:15 UTC 2016


> Date: Wed, 10 Aug 2016 22:58:34 +0000
> From: Keith Busch <keith.busch at intel.com>
> 
> On Thu, Aug 11, 2016 at 12:17:48AM +0200, Mark Kettenis wrote:
> > > From: Keith Busch <keith.busch at intel.com>
> > > Date: Tue,  9 Aug 2016 15:39:35 -0600
> > > 
> > > A pci "domain" is a purely software construct, and need not be limited
> > > to the 16-bit ACPI defined segment. The Linux kernel currently supports
> > > 32-bit domains, so this patch matches up with those capabilities to make
> > > it usable on systems exporting such domains.
> > 
> > Well, yes, and no.  PCI domains really are a hardware property.  There
> > are systems out there that have multiple PCI host bridges, each with
> > their own separate config/mem/io address spaces and bus numbers
> > starting with 0 for the root of the PCI bus hierarchy.  Pretty much
> > any 64-bit SPARC system falls into this category, and I've seen
> > PA-RISC and POWER systems with such a hardware configuration as well.
> > And given that HP's Itanium line developed from their PA-RISC hardware
> > I expect them to be in the same boat.  There is no domain numbering
> > scheme implied by the hardware though, so domain numbers are
> > indeed purely a software construct.  On OpenBSD we simply number the
> > domains sequentially.  So 16 bits are more than enough.
> > 
> > The Linux kernel could do the same with ACPI segments (which may or
> > may not map onto true PCI domains).  That would remove the need to
> > change the libpciaccess ABI.  Although I can see that having a 1:1
> > mapping of ACPI segments to domains is something that is nice to have.
> 
> I can give a little more background on where this is coming from. The
> Intel x86 Skylake E-Series has an option to provide a number of additional
> "host bridges". The "vmd" driver in the Linux mainline kernel supports
> this hardware.
> 
> For better or worse, Linux does match the segment number to the
> domain. The "vmd" hardware is not a segment though, and decoupling _SEG
> from domain numbers in the Linux kernel proved difficult and unpopular
> with the devs. To avoid the potential clash from letting vmd hardware
> occupy the same range that an ACPI _SEG could define, we let VMD start
> at 0x10000.
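
That 0x10000 split is at least easy to act on from the library side.
Purely as a sketch (the constant name below is made up, not something
vmd or libpciaccess defines), a consumer can tell a synthetic VMD
domain from a real ACPI segment with a single comparison:

  #include <stdint.h>

  /* Hypothetical constant: domains at or above this value are the
   * synthetic ones created for VMD; everything below is a plain
   * ACPI segment number. */
  #define VMD_DOMAIN_BASE 0x10000u

  static int domain_is_vmd(uint32_t domain)
  {
      return domain >= VMD_DOMAIN_BASE;
  }
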
> 
> I've already patched pci-utils (provides lspci, setpci) to allow
> this, but I missed this library at the time (my dev machines are all
> no-graphics). Right now, a system with VMD segfaults startx. I believe
> it's down to the error handling that frees the pci devices and sets
> pci_system->devices to NULL. It looks like this is dereferenced later,
> but I'm very unfamiliar with the code base and not sure which repo to
> look into.
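
For what it's worth, the shape of that crash as I read it (this is a
hypothetical sketch, not the actual libpciaccess code, though the
field names mirror it) would be a lookup that trusts
pci_system->devices to be non-NULL after the error path has freed it:

  #include <stddef.h>
  #include <stdint.h>

  struct pci_device { uint32_t domain; /* ... */ };
  struct pci_system { size_t num_devices; struct pci_device *devices; };

  static const struct pci_device *
  device_at(const struct pci_system *sys, size_t i)
  {
      /* The guard below is what the segfault suggests is missing
       * somewhere: if an error path freed the array and left the
       * pointer NULL, dereferencing it unchecked takes down startx. */
      if (sys == NULL || sys->devices == NULL || i >= sys->num_devices)
          return NULL;
      return &sys->devices[i];
  }
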
> 
> If preserving the libpciaccess ABI is of high importance, I think the only
> other option is to just ignore domains requiring 32 bits.  That should
> be okay for us since X should not need the devices in these domains
> anyway. I'll send a patch for consideration.
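
For completeness, skipping them would be cheap to implement.  As a
rough sketch (not the actual sysfs scan code, just the idea): sysfs
names devices "dddd:bb:dd.f", so any entry whose domain does not fit
the existing 16-bit field could simply be left out of the device list.

  #include <stdio.h>

  /* Returns non-zero if the sysfs entry names a device whose domain
   * fits the current 16-bit field; a scan loop would skip the rest.
   * Illustrative only. */
  static int domain_fits_16bit(const char *sysfs_name)
  {
      unsigned int dom, bus, dev, func;

      if (sscanf(sysfs_name, "%x:%x:%x.%x", &dom, &bus, &dev, &func) != 4)
          return 0;   /* unparsable entry: skip it as well */
      return dom <= 0xffffu;
  }
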

To be honest, bumping the shared library major is perfectly fine with
me.  The current "thou shalt never bump the shared library major"
mantra that seems to have taken hold of the Linux community makes no
sense.  Why have a shared library major at all if you can never bump
it?

In any case, the impact of bumping the libpciaccess shared library major
should be fairly limited, as it's not widely used outside of X.  But I
fear it does affect the driver API.
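
The reason it ripples into the drivers is plain struct layout.
Roughly (the fields after "domain" are illustrative, not the exact
pciaccess.h declaration), widening the field shifts the offset of
every member that follows it, so anything compiled against the old
struct -- X drivers included -- reads the wrong bytes until rebuilt:

  #include <stdint.h>

  struct pci_device_old {
      uint16_t domain;                 /* 16-bit field today */
      uint8_t  bus, dev, func;
      uint16_t vendor_id, device_id;   /* starts at offset 6 */
      /* ... */
  };

  struct pci_device_new {
      uint32_t domain;                 /* grown to 32 bits ...       */
      uint8_t  bus, dev, func;
      uint16_t vendor_id, device_id;   /* ... now starts at offset 8 */
      /* ... */
  };
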

> > However, I do believe that ACPI segments are actually encoded as
> > 64-bit integers.  So such a 1:1 mapping may not be achievable.
> 
> True, though only 16 bits are used (ACPI 6, section 6.5.6):
> 
>   "
>   The lower 16 bits of _SEG returned integer is the PCI Segment Group number.
>   Other bits are reserved.
>   "
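
So a strict 1:1 mapping would only ever use the low half anyway.
Just to spell out the quoted wording (hypothetical helper, not an
existing API), the masking amounts to:

  #include <stdint.h>

  /* Per the quoted ACPI 6 text, only the low 16 bits of the _SEG
   * integer carry the PCI Segment Group number; the rest is reserved. */
  static uint16_t seg_group_number(uint64_t seg)
  {
      return (uint16_t)(seg & 0xffffu);
  }
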

At least whoever designed the libpciaccess interface had some reason
to pick a 16-bit integer for the domain.


