glib dependency for the X Server
otaylor at redhat.com
Mon Apr 3 17:33:41 PDT 2006
On Mon, 2006-04-03 at 20:26 +0100, Alan Cox wrote:
> On Llu, 2006-04-03 at 14:00 -0400, Ray Strode wrote:
> > would still abort()). On linux with the default over committing
> > behavior enabled the error checking code is pretty worthless too,
> > because the kernel will just kill programs randomly on Out of memory
> > instead of return NULL from a malloc.
> Its attitudes like that and sloppy programming by desktop programmers
> that cause exactly these problems.
[ Way off topic, sorry for the noise ]
There's a certain warm and virtuous feeling that comes when you've
covered all the corner cases, documented every possible error condition,
written test cases, and can say with confidence that whatever happens,
it's someone else's fault. I love writing code like that.
But the cost of handling out of memory conditions is frequently the
legibility and thus verifiability of code. Untested code paths are buggy
code paths. Virtually nobody writing desktop software tests their out of
memory code paths. (D-BUS is one of the few exceptions I know to that.)
If you read through some of the detailed reports from the recent
Coverity scan, a large percentage, perhaps even *most* of the bugs
caught were on out-of-memory code paths.
In the end, I'm glad that when I'm in a swap storm, the X server is
pretty robust against running out of memory. I'm much more grateful
that my filesystem is robust against that. But would the world
be a better place if every GTK+ method, every Qt method (*)
could fail with an out-of-memory error? I really doubt it.
If we want the desktop to work well on low-memory machines, the
answer isn't to try to fix 100,000 code paths that might get triggered
after the user's machine sits there swapping for 15 minutes; it's to
fix leaks, fight bloat, and make Evolution and Firefox realize: "Wait,
this is a 128M machine, maybe using 160MB of memory to cache mail
folders / images / whatever isn't a good idea."
This is all pretty irrelevant to X. What X is supposed to do on
out of memory was set in stone long ago, and since the option of
"fail the operation, return a BadAlloc error to the client" is
there most of the time for X, it's usually not even that complex
to keep going.
(*) Actually Qt probably throws std::bad_alloc back and catches it in
the mainloop nowadays... If you've ever seen a complex C++ program or
Java program begin to run out of memory in this way, you know that
*correctness* is very unlikely to result. But it's certainly a less
cumbersome way of giving the user a wing-and-a-prayer chance of
being able to save before crashing. And the saved copy might not
even be corrupt.