A question on new mode validation code

Henry Zhao henry.zhao at sun.com
Thu Apr 26 13:30:46 PDT 2007

In the new mode validation code released in 7.2 (in xf86Mode.c),
this portion of code is used to shrink the virtual size, if necessary,
after the mode pool is created:

  /*
   * If we estimated the virtual size above, we may have filtered away all
   * the modes that maximally match that size; scan again to find out and
   * fix up if so.
   */
  if (inferred_virtual) {
      int vx = 0, vy = 0;
      for (p = scrp->modes; p; p = p->next) {
          if (p->HDisplay > vx && p->VDisplay > vy) {
              vx = p->HDisplay;
              vy = p->VDisplay;
          }
      }
      if (vx < virtX || vy < virtY) {
          xf86DrvMsg(scrp->scrnIndex, X_WARNING,
                     "Shrinking virtual size estimate from %dx%d to %dx%d\n",
                     virtX, virtY, vx, vy);
          virtX = vx;
          virtY = vy;
          linePitch = miScanLineWidth(vx, vy, linePitch, apertureSize,
                                      BankFormat, pitchInc);
      }
  }

Is there any reason why, in recalculating linePitch, the old linePitch
(a value corresponding to the virtual size before shrinking) is used
instead of minPitch? We saw an example where the virtual size before
shrinking was 1600x1200 and after shrinking was 1280x1024, but the new
linePitch was still 1600, which caused a garbled screen. The problem
could be solved by using minPitch.


More information about the xorg mailing list