Hitting X limit in launching maximum number of vkcube instances

Olivier Fourdan ofourdan at redhat.com
Wed Aug 16 07:46:19 UTC 2023


Hi Anuj,

On Mon, Aug 14, 2023 at 8:27 PM Anuj Phogat <anuj.phogat at gmail.com> wrote:

> - I'm still getting the "_XSERVTransSocketUNIXAccept: accept() failed" error
> after launching ~ 256 instances of vkcube. I was able to find the root
> cause of the error after patching libxtrans based on Adam's suggestion:
> "_XSERVTransSocketUNIXAccept: accept() failed (Too many open files)"
> - Based on the hint from the error string, I changed the system-wide limit
> to increase the maximum number of open files allowed from the default 1024
> to a higher value; 'ulimit -n' confirmed the new limit. But this change
> doesn't get me past the error at ~ 256 instances of vkcube.
> - I hit the same error, but at ~ 330 instances, when I try to run glxgears.
>
> Questions:
> What am I missing when changing the open files limit ?
>

Make sure the X server process has inherited that limit; depending on how and
where you bumped it, the new limit may not apply to the X server process.

You can check using the /proc filesystem on Linux:

$ cat /proc/$(pidof Xorg)/limits
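
If the X server turns out to have a lower limit, it has to be raised where
the server is actually started; bumping it in your own shell only affects
processes launched from that shell. As a rough sketch, assuming the server
is started by a systemd-managed display manager such as gdm.service (the
unit name on your system may differ):

$ sudo systemctl edit gdm.service
  # in the drop-in that opens, add:
  [Service]
  LimitNOFILE=16384
$ sudo systemctl restart gdm.service   # note: this ends the current session
$ cat /proc/$(pidof Xorg)/limits | grep "Max open files"

If instead you start X from a script (e.g. via startx), a "ulimit -n 16384"
in that script before the server is launched should have the same effect,
since the limit is inherited by child processes.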

> What else can I try to get past this error ?
>

If you can run 330 instances of glxgears, you are already past the default
limit of 256 clients, so the maxclients change did take effect.
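
For reference (from memory, please double-check the Xorg(1) and xorg.conf(5)
man pages for the accepted values), that client limit is raised either on the
server command line or via a server flag, e.g.:

$ Xorg -maxclients 512 ...

or in xorg.conf:

Section "ServerFlags"
    Option "MaxClients" "512"
EndSection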

It is worth noting that the limit applies to all X11 clients, so if you run
a window manager, an xterm, etc., everything counts.

Also, if a client opens more than one connection to the X server, each
connection counts. So you can expect a limit of 512 to give you a somewhat
lower actual number of client processes.
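
If you want to see how many connections are actually in use at a given
moment, a rough sketch (assuming Linux; replace Xorg with Xwayland as
appropriate) is to count the server's open file descriptors, or to list the
clients the server knows about:

$ ls /proc/$(pidof Xorg)/fd | wc -l
$ xlsclients | wc -l

The file descriptor count also includes non-client descriptors (devices,
logs, listening sockets), so treat it as an upper bound rather than an exact
client count.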


> Should I reopen xserver issue [1] or create a new issue to track it ?
>

I doubt that this is an X server issue.

FWIW, I just tried here with a rootful standalone Xwayland instance and was
able to run 474 instances of vkcube:

$ Xwayland -maxclients 512 -decorate :12 &
$ for i in $(seq 1 512); do DISPLAY=:12 vkcube & done
$ ps aux | grep vkcube | wc -l
474

I have seen some "Maximum number of clients reached" errors while the
vkcube instances were spawning en masse, so I suspect the Vulkan
implementation temporarily opens an extra connection to the display for some
reason; having all the instances start at once may therefore lead to a lower
effective client limit.

To confirm that theory, I redid the same test, waiting a bit between each
instance of vkcube.

And now I can reach the limit of 512:

$ for i in $(seq 1 512); do DISPLAY=:12 vkcube & sleep .2; done
$ ps aux | grep vkcube | wc -l
512
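
If you want to observe those transient connections yourself, one possible
sketch (assuming Linux and a single Xwayland process) is to watch the
server's descriptor count while the clients are spawning:

$ watch -n 0.5 'ls /proc/$(pidof Xwayland)/fd | wc -l'

A short-lived spike above the steady-state count would be consistent with
the extra temporary connection theory above.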

So yeah, no bug in the Xserver AFAICS.

This is with:

$ cat /proc/$(pidof Xwayland)/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        unlimited            unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             62319                62319                processes
Max open files            16777216             16777216             files
Max locked memory         8388608              8388608              bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       62319                62319                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us


> [1] : https://gitlab.freedesktop.org/xorg/xserver/-/issues/1310
>

Cheers
Olivier