3D X
Deron Johnson
Deron.Johnson at Sun.COM
Tue Apr 4 10:00:37 PDT 2006
Russell Shaw wrote on 04/03/06 18:47:
> Whether 3D transform effects were added to the X protocol, or inherently
> available by implementing all the X windows using OpenGL in the server,
> I was wondering why there should be any complexity with window interaction,
> even if they are alpha blended (transparency effects).
No, transparent windows don't introduce much complexity in window
interaction, other than that we have found transparency needs to be
used very judiciously. If used too much (e.g. for transparent menus),
it can cause great confusion for the user.
> That seems very simple to me, assuming that "clicking" a window is
> seen as a "ray" extending from the mouse cursor, aligned with the Z
> axis until it hits the first window that has registered for mouse events.
> 2D->3D transformation is no big deal.
Correct. Firing the ray and computing the intersection is the most
trivial part of the process; a minimal sketch follows below. What turns
out to be very complicated is properly synchronizing 2D and 3D events
together (2D events are events which hit X windows; 3D events are
events which hit pure 3D objects). Most of this complexity stems from
the inherent complexity of the X11 server grab code. I have a
preliminary implementation which works for most cases (see the lg3d-dev
branch of xc and look for #ifdef LG3D, mostly in dix/events.c), but
Keith and I are now working toward a more robust version of this called
Coordinate Transform Redirection (CTR).
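
To make the "trivial part" concrete, here is a minimal sketch in C of
firing a pick ray along the view direction and taking the nearest
window that selected for events. The Win3D and PickWindow names are
hypothetical illustrations only, not the lg3d or X server code; each
3D-transformed window is modeled as a planar rectangle in world space.

#include <stddef.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

typedef struct Win3D {
    Vec3 origin;          /* world-space position of the window's corner */
    Vec3 u, v;            /* world-space edge vectors (width and height) */
    int wants_events;     /* has this window selected for pointer events? */
    struct Win3D *next;
} Win3D;

static Vec3 sub(Vec3 a, Vec3 b) { return (Vec3){a.x-b.x, a.y-b.y, a.z-b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return (Vec3){a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

/*
 * Fire a ray from `eye` along `dir` -- the view direction under the
 * cursor, not the world Z axis -- and return the nearest window that
 * selected for events, with (*wx, *wy) set to window-local coordinates.
 */
Win3D *PickWindow(Vec3 eye, Vec3 dir, Win3D *list, double *wx, double *wy)
{
    Win3D *best = NULL;
    double best_t = INFINITY;

    for (Win3D *w = list; w != NULL; w = w->next) {
        if (!w->wants_events)
            continue;

        Vec3 n = cross(w->u, w->v);        /* window plane normal */
        double denom = dot(n, dir);
        if (fabs(denom) < 1e-9)
            continue;                      /* ray parallel to the window */

        double t = dot(n, sub(w->origin, eye)) / denom;
        if (t <= 0.0 || t >= best_t)
            continue;                      /* behind the eye, or farther */

        Vec3 hit = { eye.x + t*dir.x, eye.y + t*dir.y, eye.z + t*dir.z };
        Vec3 rel = sub(hit, w->origin);

        /* Project the hit point onto the window's edge vectors. */
        double a = dot(rel, w->u) / dot(w->u, w->u);
        double b = dot(rel, w->v) / dot(w->v, w->v);
        if (a < 0.0 || a > 1.0 || b < 0.0 || b > 1.0)
            continue;                      /* missed the rectangle */

        best = w; best_t = t; *wx = a; *wy = b;
    }
    return best;
}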
We agreed on the xorg-arch alias several months ago that it was unwise
to try to shoehorn the 3D scene graph into the X server; rather, an
external client (a.k.a. the composite manager) should manage the scene
graph. Therefore, to handle input events in a 3D environment, whenever
the X server needs any transform information (for events, for
QueryPointer, etc.) it must ask the composite manager. The composite
manager will perform the necessary scene graph pick and send the
resulting information back to the X server; a rough sketch of that
round trip follows below. We are currently planning to implement the
first cut of this for release 7.2.
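
As a rough illustration of that round trip: the real CTR protocol was
still being designed at the time, so every structure and name below is
hypothetical, and the "wire" is simulated with a direct function call
so the sketch stands alone.

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

typedef struct {            /* server -> composite manager */
    uint32_t serial;        /* matches the reply to the query */
    int16_t  x, y;          /* pointer position in device (screen) space */
} CTRPickRequest;

typedef struct {            /* composite manager -> server */
    uint32_t serial;
    uint32_t window;        /* XID of the window that was hit, 0 if none */
    int16_t  wx, wy;        /* hit point in that window's 2D coordinates */
} CTRPickReply;

/*
 * Composite manager side: perform the scene graph pick.  A real
 * manager would fire a ray through its 3D scene (as in the earlier
 * sketch); a fixed answer stands in for that here.
 */
static CTRPickReply composite_manager_pick(CTRPickRequest req)
{
    CTRPickReply reply = { .serial = req.serial, .window = 0x400021,
                           .wx = (int16_t)(req.x - 100),
                           .wy = (int16_t)(req.y - 50) };
    return reply;
}

/*
 * Server side: the server can no longer map pointer coordinates to a
 * window itself, so event delivery (and QueryPointer, etc.) must ask
 * the composite manager and wait for the answer.
 */
static void deliver_device_event(int16_t x, int16_t y)
{
    static uint32_t serial;
    CTRPickRequest req = { .serial = ++serial, .x = x, .y = y };

    CTRPickReply reply = composite_manager_pick(req);  /* "round trip" */

    printf("event at (%d,%d) -> window 0x%" PRIx32 " at local (%d,%d)\n",
           x, y, reply.window, reply.wx, reply.wy);
}

int main(void)
{
    deliver_device_event(240, 180);
    return 0;
}

The point of the indirection is that all picking and transform logic
lives in the composite manager, so the server never needs to know the
scene graph itself.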
> I realize now the user's mouse clicks cannot simply "travel" down the Z axis,
> because if the screen is rotated, the user can't see what is "under" the
> mouse cursor. Therefore, mouse events occur when the cursor is over something
> from the user's point of view (screen-normal V axis).
>
> The only extra thing that isn't in a 2D system is that when the X screen
> is rotated, things that weren't overlapping before may be overlapping *now*
> from the user's view perspective (V axis, normal to the physical screen).
>
> Obviously, when a screen region is "dirtied", the server will need to
> generate "expose" events for all windows under that region, using the
> normal V axis.
>
> Am I right? All this seems a very "hardwired", logical, simple behaviour
> that doesn't require much of an intelligent "compositing manager" per se.
All I can say is that what seems simple on the surface is actually very
complicated under the covers. I have been working on developing a 3D
window system for 2.5 years now, and while I've had considerable
success, it has been by no means "simple."