Xserver and Gettimeofday

Xavier Bestel xavier.bestel at free.fr
Wed Aug 29 02:08:40 PDT 2007


On Wed, 2007-08-29 at 08:51 +0200, Brice Goglin wrote:
> Lukas Hejtmanek wrote:
> > Hello,
> >
> > I was playing with some HD streaming and I noticed that XV overlays make
> > heavy use of gettimeofday (in particular the nvidia closed-source driver,
> > but the open-source one is even worse), resulting in up to 50% of CPU time
> > spent in the kernel in clock_gettime and in context switches.
> >
> > Is there any possible solution for this? I guess it is just a stupid driver
> > architecture that loops on gettimeofday instead of waiting for an IRQ.
> >   
> 
> 
> At OLS 2006, Dave Jones (in his famous talk about why user space sucks)
> complained about X calling gettimeofday too often (and gettimeofday
> being expensive). Things like mmap'ing /dev/rtc were proposed but never
> got merged into Linux in the end. There's a new timerfd syscall in 2.6.22
> which allows blocking on a file descriptor until a timer expires; I
> don't know whether it could help with your problem.
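
For illustration (this sketch is mine, not code from the thread), blocking on
a timerfd looks roughly like the following. It assumes a kernel with timerfd
support and uses the timerfd_create()/timerfd_settime() interface as exposed
by glibc; the 20 ms period is only there to echo the X scheduler timeslice
discussed below.

/*
 * Minimal sketch: sleep on a timer through a file descriptor instead of
 * polling gettimeofday() in a loop.  Assumes Linux timerfd support.
 */
#include <sys/timerfd.h>
#include <time.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* A timer whose expirations are delivered through a file descriptor. */
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);
    if (fd < 0) {
        perror("timerfd_create");
        return 1;
    }

    /* Arm it to fire every 20 ms (illustrative period only). */
    struct itimerspec its = {
        .it_value    = { .tv_sec = 0, .tv_nsec = 20 * 1000 * 1000 },
        .it_interval = { .tv_sec = 0, .tv_nsec = 20 * 1000 * 1000 },
    };
    if (timerfd_settime(fd, 0, &its, NULL) < 0) {
        perror("timerfd_settime");
        return 1;
    }

    /* read() blocks until the timer expires -- no busy gettimeofday() loop. */
    uint64_t expirations;
    if (read(fd, &expirations, sizeof(expirations)) == sizeof(expirations))
        printf("timer fired %llu time(s)\n", (unsigned long long)expirations);

    close(fd);
    return 0;
}

The descriptor could also be added to the server's existing select() loop, so
the timer wakeup would fold into normal event dispatch rather than being an
extra polling path.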

Keith said that the X scheduler makes heavy use of gettimeofday() calls to
make sure tasks don't run for more than 20 ms. As this kind of time
measurement doesn't seem to require high precision, maybe simply using the
TSC (when available) would be enough?
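
As an illustration of that idea (my sketch, not Keith's code), a coarse
"has this task exceeded 20 ms?" check based on the TSC could look like the
code below. It assumes x86 with a roughly constant TSC rate, and the
calibration against gettimeofday() is only illustrative; frequency scaling
and unsynchronized TSCs across cores are exactly the caveats behind the
"when available" qualifier.

/*
 * Minimal sketch: use rdtsc for a cheap, low-precision elapsed-time check.
 * Assumes x86 and a roughly constant TSC frequency (not guaranteed on
 * older CPUs with frequency scaling or unsynchronized cores).
 */
#include <stdint.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    /* Rough calibration: count TSC ticks over a known wall-clock interval. */
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    uint64_t c0 = rdtsc();
    usleep(100 * 1000);                           /* ~100 ms */
    uint64_t c1 = rdtsc();
    gettimeofday(&t1, NULL);

    long long usec = (long long)(t1.tv_sec - t0.tv_sec) * 1000000LL
                   + (t1.tv_usec - t0.tv_usec);
    double ticks_per_ms = (double)(c1 - c0) / ((double)usec / 1000.0);

    /* The timeslice check is then a register read and a compare,
     * instead of a syscall per iteration. */
    uint64_t start = rdtsc();
    /* ... let the client run ... */
    if ((double)(rdtsc() - start) > 20.0 * ticks_per_ms)
        printf("task exceeded its 20 ms timeslice\n");

    return 0;
}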

	Xav




