WebKit failing to find GLXFBConfig, confusion around fbconfigs + swrast
Daniel Drake
drake at endlessm.com
Mon Aug 27 08:07:54 UTC 2018
Hi,
I'm looking at a strange issue which has taken me across WebKit,
glvnd, mesa and X, and has left me somewhat confused about whether
I've found any real bugs here or just hit expected behaviour (my
graphics knowledge doesn't go far beyond the basics).
The issue:
Under xserver-1.18 + mesa-18.1, on Intel GeminiLake, the
WebKit-powered GNOME Online Accounts UI shows a blank window (instead
of the web service login UI). The logs show a WebKit crash at the same
time, because it doesn't handle a GLXBadFBConfig X error.
On the WebKit side, it is failing to find an appropriate GLXFBConfig
that corresponds to the X visual of the window, which is using a depth
32 RGBA8888 visual. It then ends up passing a NULL config to
glXCreateContextAttribsARB(), which results in the error.
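For concreteness, the lookup WebKit is effectively attempting can be
sketched as something like the following. This is my own illustration of
the technique, not WebKit's actual code, and find_fbconfig_for_window is
a made-up name:

/* Hypothetical sketch, not WebKit's actual code: find the GLXFBConfig
   whose GLX_VISUAL_ID matches the visual of an existing window. */
#include <stddef.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/glx.h>

static GLXFBConfig find_fbconfig_for_window(Display *dpy, Window win)
{
    XWindowAttributes attr;
    XGetWindowAttributes(dpy, win, &attr);

    VisualID target = XVisualIDFromVisual(attr.visual);
    int screen = XScreenNumberOfScreen(attr.screen);

    int count = 0;
    GLXFBConfig *configs = glXGetFBConfigs(dpy, screen, &count);
    GLXFBConfig match = NULL;

    for (int i = 0; i < count; i++) {
        int visual_id = 0;
        glXGetFBConfigAttrib(dpy, configs[i], GLX_VISUAL_ID, &visual_id);
        if ((VisualID) visual_id == target) {
            match = configs[i];
            break;
        }
    }
    XFree(configs);

    /* If no fbconfig advertises this visual (the depth 32 case described
       above), this returns NULL -- and passing NULL on to
       glXCreateContextAttribsARB() is what produces GLXBadFBConfig. */
    return match;
}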
Inspecting the available visuals and GLXFBConfigs with glxinfo, I
observe that there is only one visual with depth 32 (the one being
used here), but there isn't even a single GLXFBConfig with depth 32.
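That observation can also be reproduced outside glxinfo with a small
standalone program along these lines (again just an illustration of
mine, not code from glxinfo or WebKit):

/* Illustration: count depth 32 X visuals vs. GLXFBConfigs whose
   associated GLX visual has depth 32, as glxinfo reports them. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;
    int screen = DefaultScreen(dpy);

    /* X visuals with depth 32 on this screen */
    XVisualInfo tmpl;
    tmpl.screen = screen;
    tmpl.depth = 32;
    int nvisuals = 0;
    XVisualInfo *vis = XGetVisualInfo(dpy, VisualScreenMask | VisualDepthMask,
                                      &tmpl, &nvisuals);
    printf("depth 32 visuals: %d\n", nvisuals);
    if (vis)
        XFree(vis);

    /* GLXFBConfigs whose associated visual has depth 32 */
    int nconfigs = 0, ndepth32 = 0;
    GLXFBConfig *configs = glXGetFBConfigs(dpy, screen, &nconfigs);
    for (int i = 0; i < nconfigs; i++) {
        XVisualInfo *vi = glXGetVisualFromFBConfig(dpy, configs[i]);
        if (vi && vi->depth == 32)
            ndepth32++;
        if (vi)
            XFree(vi);
    }
    printf("depth 32 fbconfigs: %d of %d\n", ndepth32, nconfigs);

    XFree(configs);
    XCloseDisplay(dpy);
    return 0;
}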
Looking at the X server side, I observe that the active code that first
deals with the fbconfigs list is __glXDRIscreenProbe in glxdriswrast.c,
which calls into mesa's driSWRastCreateNewScreen() and gets the
available fbconfigs from there.
I then spotted a log message:
(EE) modeset(0): [DRI2] No driver mapping found for PCI device 0x8086 / 0x3184
and then found hw/xfree86/dri2/pci_ids/i965_pci_ids.h, which (on this
old X) is missing the GeminiLake PCI IDs, so I added the ID there. Now I
have my depth 32 fbconfig with the right visual assigned, and WebKit works.
Questions:
1. What should WebKit be doing in the event that it is unable to find a
GLXFBConfig that corresponds to the X visual of its window?
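For context on what a client can do instead of crashing: the usual
pattern (similar in spirit to the sample code in the GLX_ARB_create_context
spec) is to bail out early on a NULL config and to trap X errors around
the create call, so a GLXBadFBConfig never reaches the default error
handler. A rough sketch of that pattern, assuming nothing about WebKit's
internals:

/* Illustration only, not a WebKit patch: refuse a NULL config up front
   and trap X errors around the create call instead of letting the
   default handler take the process down. */
#include <X11/Xlib.h>
#include <GL/glx.h>

typedef GLXContext (*create_context_attribs_fn)(Display *, GLXFBConfig,
                                                GLXContext, Bool, const int *);

static Bool ctx_error_seen;

static int ctx_error_handler(Display *dpy, XErrorEvent *ev)
{
    (void) dpy;
    (void) ev;
    ctx_error_seen = True;
    return 0;
}

static GLXContext create_context_checked(Display *dpy, GLXFBConfig config,
                                         const int *attribs)
{
    create_context_attribs_fn create = (create_context_attribs_fn)
        glXGetProcAddressARB((const GLubyte *) "glXCreateContextAttribsARB");

    if (!create || !config)
        return NULL;   /* no usable fbconfig: fail gracefully, don't crash */

    ctx_error_seen = False;
    XErrorHandler old_handler = XSetErrorHandler(ctx_error_handler);

    GLXContext ctx = create(dpy, config, NULL, True, attribs);
    XSync(dpy, False);   /* force any GLXBadFBConfig to arrive here */

    XSetErrorHandler(old_handler);

    return ctx_error_seen ? NULL : ctx;
}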
2. Why is swrast coming into the picture? Is swrast being used for rendering?
I was surprised to see that appear in the traces. I had assumed that
with a new enough mesa, I would be avoiding software rendering
codepaths.
I don't think it's using swrast for rendering, because I feel like I
would have noticed the corresponding slow performance; also, even before
my changes, glxinfo says:
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: Intel Open Source Technology Center (0x8086)
    Device: Mesa DRI Intel(R) UHD Graphics 605 (Geminilake) (0x3184)
    Version: 18.1.6
    Accelerated: yes
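Those last lines come from GLX_MESA_query_renderer, and the same check
can be made from code once a context is current. A rough sketch,
assuming the extension is present and that GL/glxext.h provides the
PFNGLXQUERYCURRENTRENDERERINTEGERMESAPROC typedef and the
GLX_RENDERER_ACCELERATED_MESA token:

/* Sketch: ask GLX_MESA_query_renderer whether the current context is
   hardware accelerated (requires a current GLX context; typedef and
   token names assumed to come from GL/glxext.h). */
#include <stdio.h>
#include <GL/glx.h>
#include <GL/glxext.h>

static void print_accelerated(void)
{
    PFNGLXQUERYCURRENTRENDERERINTEGERMESAPROC query =
        (PFNGLXQUERYCURRENTRENDERERINTEGERMESAPROC)
            glXGetProcAddressARB((const GLubyte *) "glXQueryCurrentRendererIntegerMESA");
    if (!query)
        return;   /* extension not available */

    unsigned int accelerated = 0;
    if (query(GLX_RENDERER_ACCELERATED_MESA, &accelerated))
        printf("Accelerated: %s\n", accelerated ? "yes" : "no");
}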
If swrast is not being used for rendering, why is it being used to
determine what the available fbconfigs are? Is that a bug?
3. Should swrast offer a depth 32 GLXFBConfig?
If I were on a setup that really uses swrast for rendering (e.g. if
mesa doesn't provide an accelerated graphics driver), I assume this
WebKit crash would be hit there too, due to not having a depth 32
fbconfig.
Should it have one?
I didn't investigate in detail, but it looks like mesa's
dri_fill_in_modes() (perhaps via its calls down to
llvmpipe_is_format_supported()) declares that depth 32 is not
supported in the swrast codepath.
4. Why is there still a list of PCI IDs in the X server?
I was under the impression that these days, rendering stuff has been
handed off to mesa, and display stuff has been handed off to KMS. Both
the kernel and mesa have corresponding drivers for those functions
(and their own lists of PCI IDs).
I was then surprised to see the X server also maintaining a list of
PCI IDs, and to see it having a significant effect on which codepaths
are followed.
Thanks for any clarifications!
Daniel