4K@60 YCbCr420 missing mode in usermode

Emil Velikov emil.l.velikov at gmail.com
Wed Jun 27 09:39:28 UTC 2018


On 27 June 2018 at 09:40, Michel Dänzer <michel at daenzer.net> wrote:
> On 2018-06-26 07:11 PM, Emil Velikov wrote:
>> On 26 June 2018 at 17:23, Michel Dänzer <michel at daenzer.net> wrote:
>>> On 2018-06-26 05:43 PM, Emil Velikov wrote:
>>>> On 25 June 2018 at 22:45, Zuo, Jerry <Jerry.Zuo at amd.com> wrote:
>>>>> Hello all:
>>>>>
>>>>>
>>>>>
>>>>> We are working on an issue where a 4K@60 HDMI display does not
>>>>> light up and only 4K@30 shows up, from:
>>>>> https://bugs.freedesktop.org/show_bug.cgi?id=106959 and others.
>>>>>
>>>>>
>>>>>
>>>>> On some displays (e.g., the ASUS PA328), the HDMI port exposes a
>>>>> YCbCr420 CEA extension block with 4K@60 supported. Such HDMI 4K@60
>>>>> is not real HDMI 2.0; it still follows the HDMI 1.4 spec, with a
>>>>> maximum TMDS clock of 300MHz instead of 600MHz.
>>>>>
>>>>> To get such 4K@60 supported, the bandwidth needs to be limited by
>>>>> reducing the color space to YCbCr420 only. We have already raised
>>>>> the YCbCr420-only flag (attached patch) on the kernel side to pass
>>>>> mode validation, and exposed it to user space.
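>>>>>
>>>>> As a back-of-the-envelope check of those numbers (assuming 8bpc and
>>>>> the standard CTA-861 VIC 97 timing; figures below are from the CTA
>>>>> spec, not from the attached patch):
>>>>>
>>>>>   3840x2160@60 pixel clock:  594.00 MHz
>>>>>   RGB/YCbCr444 TMDS clock:   594 MHz           (needs HDMI 2.0, 600MHz max)
>>>>>   YCbCr420 TMDS clock:       594 / 2 = 297 MHz (fits HDMI 1.4, 300MHz max)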
>>>>>
>>>>>
>>>>>
>>>>>     We think that one of the causes of this problem is usermode
>>>>> pruning the 4K@60 mode from the modelist (attached Xorg.0.log). It
>>>>> seems that when usermode receives all the modes, it doesn't take the
>>>>> 4K@60 YCbCr4:2:0-specific mode into account. To pass validation and
>>>>> be added to the usermode modelist, its pixel clock needs to be
>>>>> divided by 2 so that it won't exceed the maximum physical TMDS pixel
>>>>> clock (300MHz). That might explain the difference in modes between
>>>>> our usermode and modeset.
>>>>>
>>>>>
>>>>>
>>>>>     Such a YCbCr4:2:0 4K@60 special mode is marked in DRM by raising
>>>>> a flag (y420_vdb_modes) inside the connector's display_info, as can
>>>>> be seen in do_y420vdb_modes(). Usermode could rely on that flag to
>>>>> pick up such a mode and halve the required pixel clock, preventing
>>>>> the mode from getting pruned out, as sketched below.
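>>>>>
>>>>> A minimal sketch of what that usermode check could look like,
>>>>> assuming the Y420-only information reaches userspace (the helper
>>>>> and limit names below are purely illustrative, not an existing API):
>>>>>
>>>>>   int clock = mode->Clock;               /* kHz, as Xorg stores it */
>>>>>   if (mode_is_y420_only(output, mode))   /* hypothetical helper */
>>>>>       clock /= 2;                        /* 4:2:0 halves the TMDS clock */
>>>>>   if (clock > max_tmds_khz)              /* hypothetical per-output limit */
>>>>>       mode->status = MODE_CLOCK_HIGH;    /* pruned, as today */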
>>>>>
>>>>>
>>>>>
>>>>> We were hoping someone could help look at this from the usermode
>>>>> perspective. Thanks a lot.
>>>>>
>>>> Just some observations, while going through some coffee. Take them
>>>> with a pinch of salt.
>>>>
>>>> Currently the kernel EDID parser (in DRM core) handles the
>>>> EXT_VIDEO_DATA_BLOCK_420 extended block.
>>>> Additionally, the kernel allows such modes only when the
>>>> (per-connector) ycbcr_420_allowed bool is set by the driver.
>>>>
>>>> A quick look shows that it's only enabled by i915 on gen10 and
>>>> Geminilake hardware.
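>>>>
>>>> For reference, the opt-in is a single flag on the connector; a
>>>> driver that can scan out 4:2:0 would set, during connector init:
>>>>
>>>>   connector->ycbcr_420_allowed = true;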
>>>>
>>>> At the same time, X does its own fairly partial EDID parsing and
>>>> doesn't handle any(?) extended blocks.
>>>>
>>>> One solution is to update the X parser, although that seems like an
>>>> endless game of cat and mouse.
>>>> IMHO a much better approach is to not use the EDID codepaths for KMS
>>>> drivers (of which AMDGPU is one).
>>>> On those, the supported modes are advertised by the kernel module via
>>>> drmModeGetConnector.
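>>>>
>>>> A minimal libdrm sketch of reading those kernel-validated modes
>>>> (fd and connector_id assumed already set up, error handling omitted):
>>>>
>>>>   #include <stdio.h>
>>>>   #include <xf86drmMode.h>
>>>>
>>>>   drmModeConnector *conn = drmModeGetConnector(fd, connector_id);
>>>>   for (int i = 0; i < conn->count_modes; i++)
>>>>       printf("%s %u kHz\n", conn->modes[i].name, conn->modes[i].clock);
>>>>   drmModeFreeConnector(conn);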
>>>
>>> We are getting the modes from the kernel; the issue is they are then
>>> pruned (presumably by xf86ProbeOutputModes => xf86ValidateModesClocks)
>>> due to violating the clock limits, as described by Jerry above.
>>>
>> I might have been too brief there. Here goes a more elaborate
>> suggestion; please point out any misunderstandings.
>>
>> If we look into the drivers, we'll see a call to xf86InterpretEDID(),
>> followed by xf86OutputSetEDID().
>> The former does a partial parse of the EDID, creating an xf86MonPtr
>> (timing information et al.), while the latter attaches it to the
>> output.
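>>
>> The usual driver pattern is roughly (simplified; edid_blob stands in
>> for however the driver fetched the raw EDID bytes):
>>
>>   xf86MonPtr mon = xf86InterpretEDID(pScrn->scrnIndex, edid_blob);
>>   xf86OutputSetEDID(output, mon);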
>>
>> Thus, as we get into xf86ProbeOutputModes/xf86ValidateModesClocks, the
>> X server checks each mode against the given timing/bandwidth
>> constraints, discarding it where applicable.
>>
>> Considering that the DRM driver already does similar checks, X could
>> side-step the parsing and filtering/validation altogether.
>> Trusting the kernel should be reasonable, considering Weston (and, I
>> would imagine, other Wayland compositors) already does so.
>
> It's still not clear to me what exactly you're proposing. Maybe you can
> whip up at least a mock-up patch?
>
>
I don't have much time to tinker with it; hopefully the following
proposal will be clear enough. If not, perhaps I'll get to it at some
point.

Step 1)
Since xf86InterpretEDID/xf86OutputSetEDID are used by both KMS and UMS
drivers, we will need another pair of functions: the former parsing
only the required info out of the EDID (ideally zero modeset details)
into, say, a new struct, and the latter attaching the new data to the
output.
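
A rough sketch of such a pair (the names and the struct are purely
hypothetical):

  /* parse only the non-mode info: physical size, gamma, name, ... */
  xf86MonInfoPtr xf86InterpretEDIDInfo(int scrnIndex, Uchar *block);

  /* attach it to the output, as xf86OutputSetEDID() does today */
  void xf86OutputSetEDIDInfo(xf86OutputPtr output, xf86MonInfoPtr info);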

Update the X server to use the data produced by the new functions,
falling back to the old ones.

Step 2)
Any X server functions that do mode validation (for example
xf86ProbeOutputModes) become mostly a no-op if the mode comes from the
kernel.
Basically no max_clock/timing adjustments, no *Validate* calls.
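
One possible shape for that guard, assuming kernel-provided modes can
be told apart by the M_T_DRIVER type bit (whether that bit is a
reliable marker in this path would need checking):

  if (mode->type & M_T_DRIVER)
      continue;   /* the kernel already validated it, skip the pruning */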

Modes added manually via xorg.conf would still need to go through all
the tweaks (tmds_freq > max_clock, nHsync == 0, etc.) and the usual
validation.
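
For example, a manually added 4K@60 mode such as (the output name here
is just an example):

  Section "Monitor"
      Identifier "HDMI-A-0"
      Modeline "3840x2160_60" 594.00  3840 4016 4104 4400  2160 2168 2178 2250 +hsync +vsync
  EndSection

would still be checked against the link's clock limits, as it is today.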

Step 3)
Update KMS driver(s) to use the xf86InterpretEDID/xf86OutputSetEDID replacement.


>> Obviously, manually added modelines (via a config file) would still
>> need to be validated.
>
> How would that be done? Does the kernel provide functionality for this?
>
The exact same way X is doing it currently. I hope the above sheds
more light on the topic.

HTH
Emil

