Plumbing explicit synchronization through the Linux ecosystem
Laurent Pinchart
laurent.pinchart at ideasonboard.com
Mon Mar 16 13:01:25 UTC 2020
Hi Tomek,
On Mon, Mar 16, 2020 at 12:55:27PM +0000, Tomek Bury wrote:
> Hi Jason,
>
> I was wrestling with the sync problems in Wayland some time ago, but only
> with regard to 3D drivers.
>
> The guarantee given by the GL/GLES spec is limited to a single graphics
> context. If the same buffer is accessed by two contexts, the outcome is
> unspecified: cross-context and cross-process synchronisation is not
> guaranteed. It happens to work with Mesa, because the read/write locking is
> implemented in kernel space, but it didn't work with the Broadcom driver,
> which has its read/write interlocks in user space.
>
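For what it's worth, a driver that keeps its interlocks in user space can
still be bridged to explicit synchronisation through
EGL_ANDROID_native_fence_sync. A minimal sketch, assuming the extension is
exposed and a context is current (error handling omitted):

/*
 * Export a sync file fd covering the GL work submitted so far. Another
 * context, another process or the compositor can then wait on that fd
 * explicitly.
 */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>

static int export_gl_fence_fd(EGLDisplay dpy)
{
	PFNEGLCREATESYNCKHRPROC create_sync = (PFNEGLCREATESYNCKHRPROC)
		eglGetProcAddress("eglCreateSyncKHR");
	PFNEGLDUPNATIVEFENCEFDANDROIDPROC dup_fence_fd =
		(PFNEGLDUPNATIVEFENCEFDANDROIDPROC)
		eglGetProcAddress("eglDupNativeFenceFDANDROID");
	PFNEGLDESTROYSYNCKHRPROC destroy_sync = (PFNEGLDESTROYSYNCKHRPROC)
		eglGetProcAddress("eglDestroySyncKHR");

	/* Insert a fence into the current context's command stream. */
	EGLSyncKHR sync = create_sync(dpy, EGL_SYNC_NATIVE_FENCE_ANDROID, NULL);

	/* The fd isn't available until the fence has been flushed. */
	glFlush();

	int fd = dup_fence_fd(dpy, sync);

	destroy_sync(dpy, sync);
	return fd;
}
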
> A Vulkan client makes it even worse because of conflicting requirements:
> vkQueuePresentKHR() is handed a number of semaphores to wait on, but the
> call itself is not allowed to block. The Wayland WSI requires
> wl_surface_commit() to be called from vkQueuePresentKHR(), which does
> require a wait, unless a synchronisation primitive representing the Vulkan
> semaphores is passed between the Vulkan client and the compositor.
>
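That conflict is what the zwp_linux_explicit_synchronization_unstable_v1
protocol is meant to resolve: the WSI can export the wait semaphores as a
sync file and hand the fd to the compositor instead of blocking in
vkQueuePresentKHR(). A rough sketch, assuming VK_KHR_external_semaphore_fd,
a semaphore created as exportable, and the generated client header for the
unstable protocol:

/*
 * Instead of waiting on the present semaphore on the CPU, export it as a
 * sync file fd and attach it to the surface as an acquire fence before
 * wl_surface_commit().
 */
#include <unistd.h>
#include <vulkan/vulkan.h>
#include <wayland-client.h>
#include "linux-explicit-synchronization-unstable-v1-client-protocol.h"

static void present_with_acquire_fence(VkDevice dev, VkSemaphore wait_sem,
		struct zwp_linux_surface_synchronization_v1 *surface_sync,
		struct wl_surface *surface)
{
	PFN_vkGetSemaphoreFdKHR get_semaphore_fd = (PFN_vkGetSemaphoreFdKHR)
		vkGetDeviceProcAddr(dev, "vkGetSemaphoreFdKHR");
	const VkSemaphoreGetFdInfoKHR fd_info = {
		.sType = VK_STRUCTURE_TYPE_SEMAPHORE_GET_FD_INFO_KHR,
		.semaphore = wait_sem,
		.handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_SYNC_FD_BIT,
	};
	int fence_fd = -1;

	get_semaphore_fd(dev, &fd_info, &fence_fd);

	/* The compositor waits on the fence, the client never blocks. */
	zwp_linux_surface_synchronization_v1_set_acquire_fence(surface_sync,
								fence_fd);
	close(fence_fd);

	wl_surface_commit(surface);
}
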
> The most troublesome part was the Wayland buffer release mechanism, as it
> only involves CPU signalling over Wayland IPC, without any 3D driver
> involvement. The choices were: the explicit synchronisation extension, a
> buffer copy in the compositor (i.e. the compositor textures from the copy,
> so the client can rewrite the original), or some implicit synchronisation in
> kernel space (which wasn't an option for the Broadcom driver).
>
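With the explicit synchronisation extension the release path carries a fence
too: instead of wl_buffer.release the compositor sends a per-commit
fenced_release (or immediate_release) event, so no copy is needed. A
client-side sketch, again assuming the unstable protocol, with the buffer
bookkeeping reduced to a toy struct:

/*
 * Handle buffer release through zwp_linux_buffer_release_v1 instead of
 * wl_buffer.release.
 */
#include <stdbool.h>
#include <wayland-client.h>
#include "linux-explicit-synchronization-unstable-v1-client-protocol.h"

struct buffer_slot {
	bool busy;
	int release_fence_fd;	/* -1 if the buffer is free immediately */
};

static void handle_fenced_release(void *data,
				  struct zwp_linux_buffer_release_v1 *release,
				  int32_t fence_fd)
{
	struct buffer_slot *slot = data;

	/*
	 * The buffer may be reused once the fence signals, or the fd can be
	 * passed straight to the GPU as an in-fence for the next render.
	 */
	slot->release_fence_fd = fence_fd;
	slot->busy = false;
	zwp_linux_buffer_release_v1_destroy(release);
}

static void handle_immediate_release(void *data,
				     struct zwp_linux_buffer_release_v1 *release)
{
	struct buffer_slot *slot = data;

	slot->release_fence_fd = -1;
	slot->busy = false;
	zwp_linux_buffer_release_v1_destroy(release);
}

static const struct zwp_linux_buffer_release_v1_listener release_listener = {
	.fenced_release = handle_fenced_release,
	.immediate_release = handle_immediate_release,
};

The client requests one release object per commit with
zwp_linux_surface_synchronization_v1_get_release() and adds the listener
before wl_surface_commit().
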
> With regard to V4L2, I believe it could easily work the same way as the 3D
> drivers, i.e. pass a buffer+fence pair to the next stage. Encode always
> succeeds, but for capture or decode the main problem is the uncertain
> outcome, I believe? If we're fine with rendering or displaying an occasional
> broken frame, then a buffer+fence pair would work there too. A broken frame
> will go into the pipeline, but the application can drain the pipeline and
> start over once the capture works again.
>
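For a consumer that has to touch the data on the CPU, waiting on such a
buffer+fence pair is cheap, since a sync file fd is pollable; a GPU or
memory-to-memory consumer would instead pass the fd down as an in-fence and
never block. A sketch of the CPU-side wait:

/* Wait for a capture out-fence (a sync file fd) before reading the buffer. */
#include <errno.h>
#include <poll.h>
#include <stdbool.h>
#include <unistd.h>

static bool wait_capture_fence(int fence_fd, int timeout_ms)
{
	struct pollfd pfd = {
		.fd = fence_fd,
		.events = POLLIN,
	};
	int ret;

	do {
		ret = poll(&pfd, 1, timeout_ms);
	} while (ret == -1 && errno == EINTR);

	close(fence_fd);

	/*
	 * The fence only tells us the capture has completed, not whether the
	 * frame is usable; that information still has to travel through
	 * another channel (e.g. an error flag on the buffer).
	 */
	return ret == 1;
}
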
> To answer some points raised by Laurent (although I'm unfamiliar with the
> camera drivers):
>
> > you don't know until capture complete in which buffer the frame has
> > been captured
>
> Surely you do; you just don't know in advance whether the capture will be
> successful.
You do in kernel space, but not in user space at the moment, due to buffer
recycling.
> > but if an error occurs during capture, they can be recycled internally and
> > put to the back of the queue.
>
> That would have to change in order to use explicit synchronisation. Every
> started capture becomes immediately available as a buffer+fence pair. The
> fence is signalled once the capture is finished (successfully or otherwise).
> The buffer must not be reused until it's released, possibly with another
> fence - in that case reuse has to wait until the release fence is signalled.
We could certainly change this at least in some cases, but it would
break existing userspace that doesn't expect incorrect frames.
I'm however not sure we could change this behaviour in every case; there
may be hardware that can't guarantee the order in which buffers will be
used. I'm aware this wouldn't be compatible with explicit
synchronization, and that's my point: camera hardware may not always
support explicit synchronization. As long as we can fall back to not
using fences, we should be fine.
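
To illustrate what I mean by falling back, a rough sketch is below. The fence
side is entirely hypothetical (V4L2_BUF_FLAG_OUT_FENCE and the fence_fd field
are modelled on past RFCs, not on any mainline API), while the blocking
VIDIOC_DQBUF path is what userspace does today:

/*
 * Dequeue a capture buffer, with or without an out-fence. The
 * V4L2_BUF_FLAG_OUT_FENCE flag and the fence_fd field are hypothetical;
 * without them we fall back to the existing blocking behaviour, where the
 * buffer is complete by the time VIDIOC_DQBUF returns.
 */
#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int dequeue_frame(int video_fd, bool use_fences, int *out_fence_fd)
{
	struct v4l2_buffer buf = {
		.type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
		.memory = V4L2_MEMORY_DMABUF,
	};

	*out_fence_fd = -1;

#ifdef V4L2_BUF_FLAG_OUT_FENCE		/* hypothetical, see above */
	if (use_fences)
		buf.flags |= V4L2_BUF_FLAG_OUT_FENCE;
#endif

	if (ioctl(video_fd, VIDIOC_DQBUF, &buf) < 0)
		return -1;

#ifdef V4L2_BUF_FLAG_OUT_FENCE
	if (use_fences)
		*out_fence_fd = buf.fence_fd;
#endif

	return buf.index;
}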
--
Regards,
Laurent Pinchart