The "future" future
Reece Dunn
msclrhd at googlemail.com
Sun Feb 10 03:45:59 PST 2008
On 10/02/2008, Chad <masterclc at gmail.com> wrote:
> I just got done watching the video on YouTube with Keith Packard and
> Bart Massey talking about Xorg. It was really quite interesting,
> which sadly means I'm far geekier than I imagined... :D
It was an interesting video. It's nice to see that X is becoming a
leader again in what can be done on the desktop.
> It got me thinking about video in general and brought me to my question:
>
> When will we see the "future" that we (historically) envision(ed)?
> Maybe not the perfect medium to ask this question, but a good choice
> nonetheless. In many science fiction films we see things like true 3d
> (not 3d on 2d) static "pictures" of people that can be turned on and
> off at the base and when on show a full 3d picture of the person. We
> also see interactive (again true 3d) movies where you can dance with a
> virtual person, or have a conversation with an AI character.
2d interfaces have been around for a long time, and games have been
driving what can be done with 3d graphics. It is only recently that
some of this has been applied to the desktop - 2d on 3d, as you say -
with the desktop cube and Flip 3D on Vista.
Games are really innovating on the user interaction side, with 2d HUDs
(Heads-Up Displays) through which you can see additional information.
The first step would be to make use of the 3d capabilities and raw
power of modern computers. Applications that have AI/avatar elements -
instant messaging clients, for example - could use 3d when rendering
the character.
The problem here is: how do you make these things 3d? For example,
editing documents or viewing web pages is a 2d experience, like
reading printed documents. However, if you look at modern web browsers
and document-based applications, they have tabs. The tabs take up 2d
real estate, but you could envision those tabs layered in 3d. Likewise
for annotations, corrections and modifications on a document. These
are still to a large extent 2d on 3d, but that is the nature of the
application.
When creating UML diagrams, or other diagrams where there are lots of
connections, it would be useful to layer them in 3d, especially for
complex diagrams.
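As a rough illustration of what "layering in 3d" could look like at
the rendering level - this is my own sketch, not something from the
thread, and it assumes era-appropriate fixed-function OpenGL with GLUT
- the snippet below draws a stack of document tabs as translucent
quads, each pushed a little further back along the Z axis. Build with
something like: gcc tabs3d.c -o tabs3d -lGL -lGLU -lglut

/* Not from the original post: a minimal sketch of the "tabs layered
 * in 3d" idea, assuming fixed-function OpenGL and GLUT.  Each tab is
 * a translucent quad pushed further back along the Z axis. */
#include <GL/glut.h>

#define NUM_TABS 4

static void display(void)
{
    int i;

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(0.0, 0.5, 3.0,    /* eye, slightly above the stack */
              0.0, 0.0, -0.5,   /* look into the layered tabs */
              0.0, 1.0, 0.0);   /* up vector */

    /* Draw the back-most tab first so alpha blending composes correctly. */
    for (i = NUM_TABS - 1; i >= 0; --i) {
        float depth = -0.4f * i;        /* each tab sits a layer deeper */
        float shade = 1.0f - 0.2f * i;  /* dim the deeper layers */

        glPushMatrix();
        glTranslatef(0.1f * i, 0.05f * i, depth);
        glColor4f(shade, shade, 1.0f, 0.8f);
        glBegin(GL_QUADS);              /* one "page" per tab */
        glVertex3f(-1.0f, -0.7f, 0.0f);
        glVertex3f( 1.0f, -0.7f, 0.0f);
        glVertex3f( 1.0f,  0.7f, 0.0f);
        glVertex3f(-1.0f,  0.7f, 0.0f);
        glEnd();
        glPopMatrix();
    }
    glutSwapBuffers();
}

static void reshape(int width, int height)
{
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (double)width / (height ? height : 1), 0.1, 20.0);
    glMatrixMode(GL_MODELVIEW);
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
    glutInitWindowSize(640, 480);
    glutCreateWindow("layered tabs");

    glEnable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutMainLoop();
    return 0;
}

A compositing window manager could do the same with real window
contents as the textures, which is essentially what the desktop cube
already does for workspaces.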
So the first step is moving 2d applications into the 3d space to take
full advantage of the capabilities that this will give you. The next
step, or a parallel one (which we will likely see in games first), is
real-time ray tracing.
Moving forward, you then need advances in holographic displays and
imaging. I suspect that ray-tracing will help in the display area, as
it will tell the holographic unit where to display a certain colour.
> In an
> easy-to-relate example, we see Princess Leia relaying a message on a
> 3d projection coming out of R2D2 playing for Luke Skywalker. These
> real life 3d things are what I am pondering. What is the "Missing
> Link" that keeps us from moving away from a flat panel (cause we sure
> have gotten good at making them bigger and better) and into a 3d world
> that is animated *around us*? I somewhat understand light
> technologies and understand we need a surface to reflect off of, but
> with fiber optics I would think we could easily be there by now.
The technologies need to mature, become cheaper (read: affordable),
more portable, and able to work in real-time.
Take ray-tracing as an example. The theory has been known for a long
time, and the technology has also been in place for a long time, but
the raw computing power needed to render images is only just starting
to become practical in real-time.
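To give a sense of how small the theory is compared to the compute
cost, here is a rough sketch (my own illustration, not from the
original discussion) of the ray/sphere intersection test at the heart
of a ray tracer; a real-time renderer has to run this kind of test for
every pixel, every object and every bounce, each frame. Build with
something like: gcc ray.c -o ray -lm

/* Not from the original post: a minimal sketch of the core of a ray
 * tracer - the ray/sphere intersection test.  The mathematics is just
 * a quadratic along the ray and has been known for decades; the cost
 * is repeating it per pixel, per object, per bounce, per frame. */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } vec3;

static double dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3 sub(vec3 a, vec3 b) { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }

/* Distance along the ray to the nearest hit, or -1.0 on a miss.
 * The ray direction is assumed to be a unit vector. */
static double ray_sphere(vec3 origin, vec3 dir, vec3 centre, double radius)
{
    vec3 oc = sub(origin, centre);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * c;
    double t;

    if (disc < 0.0)
        return -1.0;            /* the ray misses the sphere */
    t = (-b - sqrt(disc)) / 2.0;
    return (t > 0.0) ? t : -1.0;
}

int main(void)
{
    vec3 eye = { 0.0, 0.0, 0.0 };
    vec3 dir = { 0.0, 0.0, -1.0 };       /* looking down -Z */
    vec3 sphere = { 0.0, 0.0, -5.0 };

    /* Hits the front of the unit sphere at t = 4. */
    printf("hit at t = %f\n", ray_sphere(eye, dir, sphere, 1.0));
    return 0;
}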
One thing that we need to figure out - and we can do that with current
3d (and future ray-traced) graphics - is the user interface and
interaction. Then, when we move to 3d displays, the applications and
UI will be ready. For this, we may currently need to look to games.
- Reece