3D X

Russell Shaw rjshaw at netspace.net.au
Mon Apr 3 18:47:45 PDT 2006


Deron Johnson wrote:
> 
> Russell Shaw wrote On 03/31/06 21:28,:
> 
>>Hi,
>>To get everything to tilt at an angle, you just add an X protocol request
>>to set a server transform specifying whatever zoom or rotation is to be
>>applied to everything on the X screen. Also, XCreateWindow would take a "z"
>>dimension for its distance "above" its parent. The total work to do that
>>in the X server would be pretty straightforward.
> 
> One could add 3D effects to the X protocol piecemeal, but it is much
> more effective to have composite managers use OpenGL. This opens up a
> wide variety of interesting visual effects, both 3D and 2D.

Hi,
Whether 3D transform effects are added to the X protocol, or are inherently
available by implementing all the X windows with OpenGL in the server,
I was wondering why there should be any complexity in window interaction,
even when windows are alpha blended (transparency effects).

> With the composite extension and OpenGL it is relatively straightforward
> to implement a system which will let you slant windows. But if you want
> to interact with the window while it is slanted, that is where the true
> complexity comes in. Fortunately, we have a plan for an X standard
> method of dealing with this. The Coordinate Transform Redirection
> extension will place the X server's event transforms under the control
> of the composite manager, who will be able to convert between X's
> 2D coordinate space and the 3D desktop coordinate space.

That seems very simple to me, assuming that "clicking" a window is treated
as a "ray" extending from the mouse cursor, aligned with the Z axis, until
it hits the first window that has registered for mouse events. The 2D->3D
transformation is no big deal.
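
To make that concrete, here is a minimal sketch in C of what I mean by the
2D<->3D conversion: a per-window 4x4 transform mapping the window's 2D
content plane into 3D desktop space, plus its inverse for going back. All
of the types and names below (XformedWindow, xform_point, etc.) are purely
hypothetical illustrations, not an existing Xlib or server interface.

/* Hypothetical types only -- not part of Xlib or the X server. */
typedef struct { double m[4][4]; } Mat4;            /* row-major 4x4 */
typedef struct { double x, y, z; } Vec3;

typedef struct {
    Mat4 to_desktop;     /* window-local (x, y, 0) -> 3D desktop space */
    Mat4 to_window;      /* precomputed inverse of to_desktop          */
    int  width, height;  /* window size in its own 2D coordinate space */
} XformedWindow;

/* Apply a projective 4x4 transform to a point (w = 1), with divide. */
static Vec3 xform_point(const Mat4 *t, Vec3 p)
{
    double v[4] = { p.x, p.y, p.z, 1.0 }, r[4];
    for (int i = 0; i < 4; i++)
        r[i] = t->m[i][0]*v[0] + t->m[i][1]*v[1]
             + t->m[i][2]*v[2] + t->m[i][3]*v[3];
    return (Vec3){ r[0] / r[3], r[1] / r[3], r[2] / r[3] };
}

/* 2D window coordinate -> 3D desktop coordinate. */
static Vec3 window_to_desktop(const XformedWindow *w, double x, double y)
{
    return xform_point(&w->to_desktop, (Vec3){ x, y, 0.0 });
}

/* A 3D desktop point known to lie on the window's plane -> 2D window
 * coordinates, which is what a core X event would need to report. */
static Vec3 desktop_to_window(const XformedWindow *w, Vec3 p)
{
    return xform_point(&w->to_window, p);
}

The inverse is the half that event delivery needs: once a hit point on the
window plane is known, desktop_to_window() gives back ordinary 2D window
coordinates to put in the core event.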

I realize now that the user's mouse clicks cannot simply "travel" down the Z
axis, because if the screen is rotated, the user can't see what is "under"
the mouse cursor. Therefore, mouse events should occur when the cursor is
over something from the user's point of view (along the screen-normal V axis).
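
A minimal picking sketch along those lines, reusing the hypothetical
XformedWindow/Vec3/xform_point helpers from the block above, and assuming
the desktop space is aligned with the user's view (physical screen at
z = 0, V axis along +z):

/* Intersect the view ray through screen point (sx, sy) with one window.
 * The ray runs along the screen normal (V axis); the hit on the window's
 * z = 0 plane is bounds-checked and its depth in desktop space returned
 * so a caller can keep the nearest hit.  Returns 1 on a hit, 0 on a miss. */
static int pick_window(const XformedWindow *w, double sx, double sy,
                       double *wx, double *wy, double *depth)
{
    /* Transform two points of the view ray (given in desktop/view
     * space) into the window's own 2D space. */
    Vec3 a = xform_point(&w->to_window, (Vec3){ sx, sy, 0.0 });
    Vec3 b = xform_point(&w->to_window, (Vec3){ sx, sy, 1.0 });
    Vec3 d = { b.x - a.x, b.y - a.y, b.z - a.z };

    if (d.z == 0.0)
        return 0;                    /* ray parallel to the window plane */

    double t = -a.z / d.z;           /* where the ray meets z = 0 */
    double x = a.x + t * d.x;
    double y = a.y + t * d.y;

    if (x < 0 || y < 0 || x > w->width || y > w->height)
        return 0;                    /* outside the window rectangle */

    *wx = x;
    *wy = y;
    /* Depth along the V axis (assuming +z points away from the user,
     * so the smallest depth is the window closest to the user). */
    *depth = window_to_desktop(w, x, y).z;
    return 1;
}

A caller would run this over the windows that have selected for pointer
events and deliver the event, with (wx, wy) as the event coordinates, to
the hit with the smallest depth -- the "first" window the ray reaches from
the user's side.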

The only extra thing that isn't in a 2D system is that when the X screen is
rotated, things that weren't overlapping before may be overlapping *now* from
the user's perspective (the V axis, normal to the physical screen).

Obviously, when a screen region is "dirtied", the server will need to
generate "expose" events for all windows under that region along the
screen-normal V axis.
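
A geometric sketch of that overlap test, again reusing the hypothetical
helpers from the first block: project a window's four corners into
desktop/view space, take their 2D bounding box on the screen plane, and
check it against the damaged rectangle. (A real design would presumably
track this per window via the Damage extension; this only shows the
"things may overlap once rotated" geometry.)

typedef struct { double x0, y0, x1, y1; } Rect;  /* screen-space rectangle */

/* Axis-aligned screen-space bounds of a transformed window. */
static Rect window_screen_bounds(const XformedWindow *w)
{
    Vec3 c[4] = {
        window_to_desktop(w, 0,        0),
        window_to_desktop(w, w->width, 0),
        window_to_desktop(w, 0,        w->height),
        window_to_desktop(w, w->width, w->height),
    };
    Rect r = { c[0].x, c[0].y, c[0].x, c[0].y };
    for (int i = 1; i < 4; i++) {
        if (c[i].x < r.x0) r.x0 = c[i].x;
        if (c[i].y < r.y0) r.y0 = c[i].y;
        if (c[i].x > r.x1) r.x1 = c[i].x;
        if (c[i].y > r.y1) r.y1 = c[i].y;
    }
    return r;
}

/* Does the damaged screen region touch this window, as seen by the user? */
static int window_needs_expose(const XformedWindow *w, Rect damage)
{
    Rect b = window_screen_bounds(w);
    return !(b.x1 < damage.x0 || b.x0 > damage.x1 ||
             b.y1 < damage.y0 || b.y0 > damage.y1);
}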

Am I right? All this seems like very "hardwired", logical, simple behaviour
that doesn't require much of an intelligent "compositing manager" per se.


