Resolution independence

olafBuddenhagen at gmx.net olafBuddenhagen at gmx.net
Wed Jul 2 23:38:29 PDT 2008


Hi,

On Fri, Jun 27, 2008 at 01:32:18PM -0400, Behdad Esfahbod wrote:

> There are two points of physical information:
> 
>   A) Dots per inch on the display surface (LCD panel, TV screen,
>   projector screen, The Wall, ...)
> 
>   B) Viewing distance
> 
> Those two are very real and can be measured.  If we have both, we can
> compute a third value:
> 
>   C) Normalized dpi / angular resolution / whatever you call it.
>   Physical dpi times viewing distance does the job.
> 
> 
> At the end, C is all the application developers care about.  That's
> why I suggest we redefine application DPIs to be that.
> 
> Next question is where to get A and B from.  A is already coming from
> X and EDID info and many devices have buggy values.  B is nonexistent.
> The solution to both is already there: HAL device info files.  These
> are small XML files setting A and a default value for B depending on
> the manufacturer and model of the display device.  The user can set
> both. This is just about defaults.
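
In code terms, the normalization Behdad describes is a one-liner -- something like the following, where the 60 cm reference distance is just an arbitrary value I picked to make the numbers concrete, not anything standardized:

/* Normalize the physical DPI to a nominal viewing distance, so that
 * e.g. a projector viewed from across the room reports a much higher
 * "normalized" DPI than its physical one.  The 60 cm reference is an
 * arbitrary choice for illustration. */
#define REFERENCE_DISTANCE_CM 60.0

double normalized_dpi(double physical_dpi, double viewing_distance_cm)
{
    return physical_dpi * viewing_distance_cm / REFERENCE_DISTANCE_CM;
}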

First of all, we do usually have two pieces of information: resolution and
display size. From these, I think we can make a pretty good guess at the
viewing distance/angular resolution -- no need for an enormous database.

The viewing distance normally depends mostly on the display size. For
anything from the size of a desktop monitor upwards, it should usually
be roughly proportional to the display size. For smaller displays, it
will obviously not be proportional, but rather approximate some minimum
realistic distance, say 25 cm or so.

The resolution also plays some role: with a higher resolution display we
will generally tend to get closer so we can see more detail, while with a
lower resolution one there is no point in doing so, so we won't.

Of course, this relation isn't linear either. With higher resolutions,
the display size becomes more dominant.
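
To make that concrete, here is a rough sketch of the kind of heuristic I
have in mind; all the constants (the 25 cm floor, the proportionality
factor, the resolution tweak) are placeholders for illustration, not
measured values:

#include <math.h>

/* Very rough guess at viewing distance from display diagonal and
 * resolution.  All constants here are made up for illustration. */
double guess_viewing_distance_cm(double diagonal_cm, double dpi)
{
    /* Roughly proportional to display size for monitor-sized and
     * larger displays... */
    double distance = 1.5 * diagonal_cm;

    /* ...but never closer than some realistic minimum. */
    if (distance < 25.0)
        distance = 25.0;

    /* Higher resolution tempts us a bit closer, lower resolution
     * pushes us back -- deliberately a weak, sublinear effect. */
    distance *= sqrt(96.0 / dpi);

    return distance;
}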

Note however that the viewing distance/angular resolution alone is not
really an ideal basis. To determine the optimal effective resolution used
for size calculations, there are other factors to take into account.

For one, while I do not agree with Glynn that at today's typical
resolutions it's *only* the pixel grid that matters, it's certainly true
that with a better resolution, things remain legible at a slightly smaller
physical size. (And in fact they subjectively look bigger than at a lower
resolution.) The truth is somewhere in between.

Also, with very small displays, we are usually ready to accept a smaller
size even at the expense of some legibility, because, well, the display
being tiny, it just can't be helped...

The bottom line is that finding the effective resolution involves many
parameters, all of which have nonlinear effects. Yet the result is a
relatively simple two-parameter function of display size and resolution.
Starting with a number of data points found by experiment, it should be
possible to fit a function that models typical users' expectations about
effective resolution pretty well.
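
Just to illustrate the shape of the thing, the fitted function could be as
simple as a power law in both parameters; the form and the numbers below
are purely invented, the real ones would come out of the experiments:

#include <math.h>

/* Hypothetical fitted model: effective DPI as a function of display
 * diagonal (cm) and physical DPI.  The scale k and the exponents a, b
 * would be fitted to user-experiment data; these values are invented
 * placeholders. */
double effective_dpi(double diagonal_cm, double physical_dpi)
{
    const double k = 30.0, a = -0.4, b = 0.6;

    return k * pow(diagonal_cm, a) * pow(physical_dpi, b);
}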

Now what to do with this effective resolution? I'm quite ambivalent
about the suggestion to redefine DPI. On one hand, it just feels wrong;
it is bound to result in even more breakage and confusion... It would
surely be more elegant to change applications to explicitly use the
effective resolution, or a scaling factor on top of the physical DPI.

From a pragmatic point of view, though, it doesn't sound like a terribly
bad idea. The truth is that the "everything is 96 DPI" hack is often
considered a good thing, because such a fixed resolution is actually
closer to the effective resolution than the physical resolution in many
important real-world use cases. (A 60" projector or large TV, tiny 200-300
DPI devices.)

It would still be a hack, but unlike the fixed resolution one, it would
actually take into account the true resolution -- in a manner that meets
users' expectations fairly well, unlike just scaling to the physical
resolution no matter what. It would Just Work (TM) with those
applications that already scale depending on DPI. I wonder how much
support this approach could gain?
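
The reason it would Just Work is that a DPI-aware application doesn't care
where the number comes from; the usual conversion is simply something like
this (a sketch of the typical point-to-pixel scaling, nothing X-specific):

/* Typical DPI-dependent scaling as done by toolkits and applications:
 * convert a size in points (1/72 inch) to pixels using whatever DPI
 * value is reported.  Feed it the effective DPI instead of the physical
 * one and everything scales as the user expects, with no change to the
 * application. */
int points_to_pixels(double points, double dpi)
{
    return (int)(points * dpi / 72.0 + 0.5);
}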

Of course, this will break those few applications (dealing with print
documents) that actually care about the real physical size. As pointed
out by others, these applications will need to be adapted anyway to use
a special interface for querying the real physical resolution, because
of multihead... And it's much more realistic to adapt those few apps
than all the others. (Especially since for most practical purposes the
starting point today is the 96 DPI hack, so with the current way of
handling things they don't really work in most cases anyway...)
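
For what it's worth, RandR 1.2 already exposes per-output physical
dimensions, so most of that special interface exists; a rough, untested
sketch of the query (error handling omitted):

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

/* Sketch: report the physical DPI of each connected output, as a
 * print-oriented application would need to.  Untested, no error
 * handling. */
int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    Window root = DefaultRootWindow(dpy);
    XRRScreenResources *res = XRRGetScreenResources(dpy, root);
    int i;

    for (i = 0; i < res->noutput; i++) {
        XRROutputInfo *out = XRRGetOutputInfo(dpy, res, res->outputs[i]);

        if (out->connection == RR_Connected && out->crtc && out->mm_width) {
            XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, out->crtc);

            printf("%s: %.1f x %.1f dpi\n", out->name,
                   crtc->width  * 25.4 / out->mm_width,
                   crtc->height * 25.4 / out->mm_height);

            XRRFreeCrtcInfo(crtc);
        }
        XRRFreeOutputInfo(out);
    }

    XRRFreeScreenResources(res);
    XCloseDisplay(dpy);
    return 0;
}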

-antrik-


