[PATCH] Cannot wait for user lock in sync_while_locked

Keith Packard keithp at keithp.com
Sun Mar 4 02:04:27 PST 2012



On Sat, 03 Mar 2012 23:44:27 -0800, Keith Packard <keithp at keithp.com> wrote:

> I've managed to create a short test case for this problem. It uses
> two threads, one allocating IDs and the other calling XPending. Both
> threads run as fast as possible, each checking to make sure the other
> one is still running and aborting when one gets stuck. After a few
> seconds of run time, the thread calling XPending gets stuck.

(as an aside, here's a fix for the test program):

        > main ()
        > {
        > 	pthread_t	other_thread;
        > 
        
        Initializing the two timeout checks helps here:
        
        	pending_time = alloc_time = time(NULL);
        
        > 	XInitThreads();

Here's what happens when the code runs:

Thread 1:                       Thread 2:                       locking_level
XFreeGC
  _XLockDisplay

/   xmutex_lock
|   _XIDHandler
|     _XAllocIDs
|       _XUserLockDisplay                                               0->1
|       _XUnlockDisplay
\         xmutex_unlock

                                XPending
                                  _XLockDisplay
/                                   xmutex_lock
|                                   _XSeqSyncFunction
|                                     _XReply
\                                       _XUnlockDisplay


/       _XInternalLockDisplay(wskip=0)
|       _XUserUnlockDisplay                                             1->0
|   _XSeqSyncFunction
|     _XReply
\       _XUnlockDisplay

/       _XInternalLockDisplay(wskip=1)
|     sync_while_locked
|       _XUserLockDisplay                                               0->1
\       _XUnlockDisplay

/                                       _XInternalLockDisplay(wskip=1)
|                                     sync_while_locked
|                                       _XUserLockDisplay               1->2
\                                       _XUnlockDisplay

/       _XInternalLockDisplay(wskip=0)
|       _XUserUnlockDisplay                                             2->1
\  _XUnlockDisplay

/                                       _XInternalLockDisplay(wskip=0)
|                                         _XDisplayLockWait                     DEADLOCK

At this point, Thread 1 is still recorded as the owner in locking_thread,
but *it no longer holds a user lock*. When Thread 2 calls
_XDisplayLockWait, it is the only thread actually holding a user-level
lock, and yet the special case in _XDisplayLockWait fails to detect
this, so it blocks forever.

Going back a bit, it seems that the user-level lock code is assuming
that only one thread will ever hold a user-level lock, or at least that
user-level locks are perfectly nesting (the first to lock is the last to
unlock).

Simply blocking in _XUserLockDisplay until no other thread holds the
user-level lock should suffice to prevent this. This can replace my
previous hack, and should be far safer, although it may reduce
potential parallel execution through the library a bit.

Here's a patch which does that:

commit e49ddbfebd1e2bbe27190f5ca7fa50b34154a402
Author: Keith Packard <keithp at keithp.com>
Date:   Sun Mar 4 02:00:13 2012 -0800

    Block for other threads in _XUserLockDisplay
    
    Wait for all other threads to release the user-level lock when
    acquiring it. This ensures that only one thread at a time holds the
    user-level lock, necessary as it is a nesting lock and a single
    variable is used to determine when the lock is nesting and when it is
    contended.
    
    Signed-off-by: Keith Packard <keithp at keithp.com>

diff --git a/src/locking.c b/src/locking.c
index 4f9a40f..b3dfb3b 100644
--- a/src/locking.c
+++ b/src/locking.c
@@ -486,6 +486,8 @@ static void _XInternalLockDisplay(
 static void _XUserLockDisplay(
     register Display* dpy)
 {
+    _XDisplayLockWait(dpy);
+
     if (++dpy->lock->locking_level == 1) {
        dpy->lock->lock_wait = _XDisplayLockWait;
        dpy->lock->locking_thread = xthread_self();

-- 
keith.packard at intel.com
