Input thread on EVoC

Fernando Carrijo fcarrijo at yahoo.com.br
Tue Jun 8 17:37:22 PDT 2010


Tiago Vignatti <tiago.vignatti at nokia.com> wrote:
> Hi,
> 
> On Fri, Jun 04, 2010 at 05:28:51AM +0200, ext Fernando Carrijo wrote:
> > 
> > Nevertheless, I wonder how many of Daniel's threefold "really" refer to the
> > inherent complexity of threading event delivery, and how many of them concern
> > the obviously huge amount of mechanical work needed to acquire and release the
> > aforementioned mutex by certain of those routines which encompass the server
> > dispatch tables. Any idea?
> 
> it's not that straightforward given, as the guys said already, X event
> dequeuing is very tied to clients and the server may spend a considerable
> amount of time doing the locking/unlocking dance only.

But it is worth trying, right?

> So let's focus on the input generation code first. And as I said already
> to you in private, I do really care to test performance-wise this new approach
> and stress the efficiency of it. This is something I haven't done at all and
> may consume a lot of time from you if done properly.

Agreed. I have rebased your input thread code upon a local branch based on
xserver master and as soon as I figure out how to solve some issues with the
s3virge driver which serves me at home, I will start benchmarking the X input
subsystem. Some featureful tools come to mind, like x11perf and perf itself,
but if you know about anything more appropriate, please enlighten me.

> There are also other missing points in that implementation I originally did.
> For instance, it's hard to predict which process will get scheduled for the
> CPU - precisely, whether the input process will get scheduled at the right
> moment or not.

I fear I couldn't parse what you said above. When you talk about the lack of
predictability, isn't it a natural consequence of us relinquishing the burden of
process scheduling and caring only about client scheduling? Or maybe you meant
that it is important for us to offer correctness of execution by having some
control over thread scheduling?

> I came up with one approach of locking ELF segments of the server in
> memory, but maybe this is a cannon to kill a mosquito. We would have to
> check this too.

I didn't even try anything like this before, but if I lived in the desert with
no one else to ask, mlocking would be my first try. Why do people refrain from
using things like __attribute__((__section__("input_thread_related"))) and some
linker trickery, à la ld scripts, to put ELF sections at well-known virtual
memory addresses? Lack of portability is the cause, isn't it?

> > Deviating a little from the above: do you think that a multithreaded X server
> > capable of servicing client requests concurrently is a realistic goal for the
> > long run? In particular, do you foresee any possible devilish traps resulted by
> > interactions between threaded event delivery and threaded request processing?
> 
> Hard to say. But definitely starting to chop off parts and thread them is one
> way to figure it out :)

Yes. Peter said the same.

To be honest, right now I'm prone to doing this outside of EVoC, since it seems
that the board expects some guarantees, especially related to the timeline, which
I cannot afford. The reason being that, as I said privately before, I have all
the time in the world, but unfortunately not all the expertise. Either way, I'm
really, really keen to start exploring and coding, in or out of EVoC. :)

> Thanks,
>              Tiago



More information about the xorg-devel mailing list