performance of pci_device_get_{vendor, device}_name() in X server startup

Alex Deucher alexdeucher at gmail.com
Wed Jun 9 07:55:59 PDT 2010


2010/6/9 Kristian Høgsberg <krh at bitplanet.net>:
> On Wed, Jun 9, 2010 at 7:23 AM, Daniel Stone <daniel at fooishbar.org> wrote:
>> On Tue, Jun 08, 2010 at 09:40:55PM -0400, Matt Turner wrote:
>>> On Tue, Jun 8, 2010 at 9:35 PM, Richard Barnette
>>> <jrbarnette at chromium.org> wrote:
>>> > Still, cost/benefit matters here:  Essentially, the justification
>>> > for all this work is a debug feature (being able to print the information
>>> > in the log when things go wrong), not a performance enhancement.
>>> > I'm not yet persuaded that that feature is worth the identified effort.
>>>
>>> I'd still like to hear some opinions from people who do serious
>>> xserver work, but from my perspective there's nothing wrong with only
>>> executing this code if -verbose is used. The output of `lspci -vv` is
>>> already a nearly required piece of any bug report, so I don't think
>>> we're losing anything here.
>>
>> Indeed.  We already get a more accurate/useful device/vendor identifier
>> string from the driver, and we don't need to know/care about non-GPU
>> devices.
>>
>> I can see how it would be useful in verbose/error cases, but eh.
>
> Agree, we should be able to just get rid of it in all cases and
> require the driver to log the chipset name if that's something the
> driver authors want to see in the log.

I don't think I've ever actually used that functionality in the
xserver.  Just about every driver prints all the info you'd need with
respect to PCI.  Most of them also include a local ID-to-string
mapping independent of the xserver anyway.  Dumping it is fine with
me.
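As an aside, the kind of driver-local ID-to-string table mentioned above can be sketched roughly as follows. This is only an illustration of the pattern, not code from any real driver; the device IDs and chip names are made up:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of a driver-local PCI device-ID-to-name table, kept
 * independently of the xserver's pciaccess-based name lookup.
 * IDs and names below are hypothetical placeholders. */
struct chip_info {
    unsigned short device_id;
    const char *name;
};

static const struct chip_info chip_table[] = {
    { 0x9440, "EXAMPLE_CHIP_A" },   /* illustrative ID only */
    { 0x9441, "EXAMPLE_CHIP_B" },   /* illustrative ID only */
};

/* Return the chip name for a device ID, or "unknown" if unlisted. */
static const char *
chip_name(unsigned short device_id)
{
    size_t i;
    for (i = 0; i < sizeof(chip_table) / sizeof(chip_table[0]); i++)
        if (chip_table[i].device_id == device_id)
            return chip_table[i].name;
    return "unknown";
}
```

A driver would typically consult such a table once at probe time and log the result, which is why the generic vendor/device name lookup in the xserver adds little.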

Alex

