Debugging xserver on Alpha

Michael Cree mcree at orcon.net.nz
Sat Oct 3 16:05:16 PDT 2009


Matt and the xorg-devel maillist,

I think it is time to report where I am with debugging the Xserver on Alpha
and to ask for advice on how to proceed.

Using the xserver 1.7 branch I find that I must mark _alpha_inb et al. with
_X_EXPORT, otherwise the int10 and related modules have undefined symbols and
won't load.
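
For reference, the change amounts to tagging the declarations along these
lines (a sketch; I show only the in functions, and the exact prototypes in
the os-support Alpha code may differ slightly):

  /* Export the Alpha port I/O helpers so that loadable modules
   * such as int10 can resolve these symbols at load time. */
  _X_EXPORT unsigned int _alpha_inb(unsigned long port);
  _X_EXPORT unsigned int _alpha_inw(unsigned long port);
  _X_EXPORT unsigned int _alpha_inl(unsigned long port);
  /* ...and likewise for _alpha_outb/_alpha_outw/_alpha_outl. */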

With commit c7680befe5ae on the xserver 1.7 branch, only support for Alphas
with sparse I/O remains.  I have already sent you and the list a patch that
re-enables the code path for Alphas with dense I/O mapping.
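
For anyone not familiar with the two models, the difference is roughly the
following (a much simplified sketch, not the actual xserver code, and the
sparse encoding varies by core chipset):

  /* Dense mapping (BWX): a port read is a plain byte load at the
   * mapped address. */
  val = *(volatile unsigned char *)(io_base + port);

  /* Sparse mapping: each I/O byte occupies a 32-byte window, so the
   * address is shifted up and the wanted byte lane extracted. */
  tmp = *(volatile unsigned int *)(io_base + (port << 5));
  val = (tmp >> ((port & 3) * 8)) & 0xff;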

I have tested on three Alphas (all have the BWX extension, hence a dense I/O
mapping model) and a number of (mostly older) video cards.  I am still
running a 2.6.30-based kernel, so I haven't tested KMS.

The xserver 1.7 branch with the two patches I mention above works on a
Compaq Alpha XP1000 (ev67 CPU) with a Radeon 9250 card.  At least I have a
GNOME desktop running on it, but I haven't done extensive usage testing.

However, on the DEC Alpha PWS600au (an ev56 CPU), I am seeing lockups and
kernel oopses with other video cards, originating in the vbe/int10 code.

With a newer Radeon HD2400 I get a kernel oops (which I reported a couple of
months ago to the linux-kernel mailing list) that appears to happen in the
int10 code.  Note that this card is not POSTed at boot; it is up to the
xserver to POST it.  This kernel oops was seen with the 1.6 xserver branch
and I haven't tested it since, as I've put this video card aside: the video
cards I discuss below seem to offer a better chance of finding the problem
(and the kernel oops is nasty - it corrupted a disc partition on one
occasion).

When I use an old 1997 Matrox card the xserver (1.7 branch) comes up fine.
An examination of the xf86-video-mga driver reveals that it doesn't load
int10 unless that is requested in xorg.conf.
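
If I wanted to force the issue, I believe it just needs a device option
along these lines (assuming the option really is spelt "Int10" for the mga
driver):

  Section "Device"
      Identifier "Matrox"
      Driver     "mga"
      Option     "Int10" "true"
  EndSection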

When I use an old SiS card (with the xf86-video-sis driver) the int10 code
is loaded, but the xserver gets lost in the int10 initialisation code and
eats 100% CPU forever.  Connecting to the X process with gdb reveals
backtraces of the following nature:

0x00000200006507e8 in inline_bwx_inb (addr=<value optimized out>)
    at ../sysdeps/unix/sysv/linux/alpha/ioperm.c:359
359    ../sysdeps/unix/sysv/linux/alpha/ioperm.c: No such file or directory.
    in ../sysdeps/unix/sysv/linux/alpha/ioperm.c
(gdb) bt
#0  0x00000200006507e8 in inline_bwx_inb (addr=<value optimized out>)
    at ../sysdeps/unix/sysv/linux/alpha/ioperm.c:359
#1  dense_inb (addr=<value optimized out>)
    at ../sysdeps/unix/sysv/linux/alpha/ioperm.c:444
#2  0x0000020000650960 in _inb (port=2199033660378)
    at ../sysdeps/unix/sysv/linux/alpha/ioperm.c:826
#3  0x0000000120135008 in _dense_inb (port=2199033660378) at lnx_ev56.c:124
#4  0x0000020000a2a930 in inb (port=986)
    at ../../../hw/xfree86/common/compiler.h:344
#5  x_inb (port=986) at helper_exec.c:333
#6  0x0000020000a34cbc in x86emuOp_in_byte_AL_DX (op1=<value optimized out>)
    at ./../x86emu/ops.c:9737
#7  0x0000020000a45158 in X86EMU_exec () at ./../x86emu/decode.c:122
#8  0x0000020000a2d5f8 in xf86ExecX86int10 (pInt=0x12024e550)
    at xf86x86emu.c:40
#9  0x0000020000a2e8a8 in xf86ExtendedInitInt10 (entityIndex=0,
    Flags=<value optimized out>) at generic.c:285
#10 0x0000020000a10410 in VBEExtendedInit (pInt=0x0, entityIndex=0, Flags=3)
    at vbe.c:68
#11 0x00000200009881b8 in SiS_LoadInitVBE (pScrn=0x120248870)
    at sis_driver.c:2828
#12 0x000002000098d504 in SISPreInit (pScrn=0x120248870,
    flags=<value optimized out>) at sis_driver.c:5996
#13 0x000000012008bef0 in InitOutput (pScreenInfo=0x12022c758, argc=4,
    argv=0x11fd45738) at xf86Init.c:817
#14 0x0000000120024da0 in main (argc=4, argv=0x11fd45738, envp=0x11fd45760)
    at main.c:204

I strongly suspect that the xserver is stuck cycling in the
X86EMU_exec() routine and never exits it.
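
That suspicion fits the structure of the main loop in x86emu/decode.c,
which, much simplified, looks something like this:

  void X86EMU_exec(void)
  {
      u8 op1;
      for (;;) {
          /* The only exit is a halt condition raised by the
           * emulated code (or by the emulator's error paths). */
          if (M.x86.intr & INTR_HALTED)
              return;
          /* Fetch the opcode at CS:IP and dispatch it. */
          op1 = (*sys_rdb)(((u32)M.x86.R_CS << 4) + (M.x86.R_IP++));
          (*x86emu_optab[op1])(op1);
      }
  }

So if the emulated BIOS code never reaches its halt sequence - for example
because it spins polling a port with inb, as the backtrace suggests - then
X86EMU_exec() spins along with it.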

Since the xserver 1.5 branch works on Alpha and the 1.6 branch doesn't, I
did a diff of the code in the int10, x86emu, os-support/linux, etc.,
directories to search for changes that might prove problematic on Alpha, but
I didn't spot anything that looked suspicious.

I see the x86emu code has debugging and disassembly capabilities.  I tried
setting DEBUG in the Makefile in the x86emu directory, but discovered I will
probably also have to insert a call to X86EMU_trace_on() before any debugging
output will occur.
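
If that is right, then the minimal change is something like the following
around the call into the emulator (my guess would be xf86ExecX86int10() in
hw/xfree86/int10/xf86x86emu.c; X86EMU_trace_on()/X86EMU_trace_off() are
declared in x86emu.h but only do anything when x86emu is built with DEBUG):

  X86EMU_trace_on();    /* start per-instruction decode/register trace */
  X86EMU_exec();        /* run the emulated BIOS code as before */
  X86EMU_trace_off();   /* stop tracing once the int10 call returns */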

Am I on the right track here?  How does one go about debugging the int10 and
vbe code?  Since an examination of code changes between a working xserver
and a broken xserver has failed to highlight anything, and the problem seems
to be somewhere in the x86 emulation, is the best approach to enable the x86
emulator's debugging modes and track exactly what it is doing?  Is there
someone who would be happy to give me a bit of guidance on doing this?  I am
quite unfamiliar with the x86 emulator, and while I have programmed in
various assembly languages in the past I am unfamiliar with Intel x86.

Cheers
Michael Cree.


