[PATCH 09/26] drm/radeon: add biggest hole tracking and wakequeue to the sa v3
Jerome Glisse
j.glisse at gmail.com
Tue May 1 08:24:20 PDT 2012
On Tue, May 1, 2012 at 8:22 AM, Christian König <deathsimple at vodafone.de> wrote:
> On 30.04.2012 17:45, Jerome Glisse wrote:
>>
>> On Mon, Apr 30, 2012 at 10:50 AM, Christian König
>> <deathsimple at vodafone.de> wrote:
>>>
>>> With that in place clients are automatically blocking
>>> until their memory request can be handled.
>>>
>>> v2: block only if the memory request can't be satisfied
>>> in the first try, the first version actually lacked
>>> a night of sleep.
>>>
>>> v3: make blocking optional, update comments and fix
>>> another bug with biggest hole tracking.
>>>
>>> Signed-off-by: Christian König <deathsimple at vodafone.de>
>>
>> I would say NAK. The idea of the SA allocator is to behave like a ring
>> buffer, i.e. ring allocation. All allocations made through the SA are
>> for objects whose lifetimes are tied to a ring: once allocated, the
>> object is scheduled through one of the rings, and once the ring is done
>> consuming it the object is freed. So the idea behind the SA allocator
>> is to keep it simple: we keep track of the highest offset used and the
>> lowest one free, we try to find room above the highest, and if we don't
>> have enough room we try at the beginning of the ring (wrap over). The
>> idea is that in most scenarios we don't have to wait because the SA is
>> big enough, and if we do have to wait it's because the GPU is full of
>> things to do, so waiting a bit won't starve the GPU.
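A rough sketch of that ring-style allocation (illustrative only; struct
sa_ring and sa_ring_alloc() are made-up names, not the driver's, and the
empty/full disambiguation is omitted for brevity):

struct sa_ring {
	unsigned size;	/* total size of the SA buffer */
	unsigned head;	/* end of the highest (newest) allocation */
	unsigned tail;	/* start of the oldest still-live allocation */
};

/* Place @nbytes right after the highest allocation, wrapping to the
 * start of the buffer when the end doesn't have enough room.  Returns
 * the offset, or -1 when the caller has to wait for the rings to
 * consume and free older allocations (i.e. for tail to advance).
 */
static int sa_ring_alloc(struct sa_ring *sa, unsigned nbytes)
{
	if (sa->head >= sa->tail) {
		/* free space: [head, size) plus [0, tail) */
		if (sa->size - sa->head >= nbytes) {
			sa->head += nbytes;
			return sa->head - nbytes;
		}
		if (sa->tail > nbytes) {	/* wrap over */
			sa->head = nbytes;
			return 0;
		}
	} else if (sa->tail - sa->head > nbytes) {
		/* free space: the single hole [head, tail) */
		sa->head += nbytes;
		return sa->head - nbytes;
	}
	return -1;	/* SA full, GPU is busy anyway, so waiting is fine */
}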
>
> Take a closer look, that's exactly what the new code intends to do.
>
> The biggest_hole is tracking the bottom of the ring allocation algorithm:
>
> In (roughly estimated) 97% of all cases it is pointing after the last
> allocation, so new allocations can be directly served.
>
> In 1% of the cases it is pointing to a free block at the end of the buffer
> that isn't big enough to serve the next allocation; in that case we just
> need to make one step to the beginning of the buffer, and (assuming that
> allocations are ring-like) there should be enough space gathered there to
> serve the allocation.
>
> In 1% of the cases a leftover allocation will prevent a new biggest_hole
> from gathering at the beginning of the buffer, but that is actually
> unproblematic, because all that needs to be done is stepping over that
> unreleased chunk of memory.
>
> And of course every algorithm has a worst case; here it happens when there
> isn't enough room to serve the allocation. In that case we need to do a
> full cycle around all allocations and then optionally wait for the
> biggest_hole to increase in size.
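That optional blocking fallback boils down to something like the following
(just a sketch under assumptions: sa_try_alloc(), sa_hole_big_enough() and
the wq/lock members are invented names here, not the ones from the patch):

static int sa_alloc(struct sa_manager *sa, unsigned nbytes, bool block)
{
	int r;

	spin_lock(&sa->lock);
	/* common case: the biggest hole sits right after the last
	 * allocation and is big enough */
	r = sa_try_alloc(sa, nbytes);
	while (r == -ENOMEM && block) {
		spin_unlock(&sa->lock);
		/* worst case: sleep until a fence retires older
		 * allocations and the free path wakes us up */
		r = wait_event_interruptible(sa->wq,
					     sa_hole_big_enough(sa, nbytes));
		if (r)
			return r;	/* interrupted by a signal */
		spin_lock(&sa->lock);
		r = sa_try_alloc(sa, nbytes);
	}
	spin_unlock(&sa->lock);
	return r;
}

with the matching wake_up(&sa->wq) happening in the free path, so that
blocked allocators get another chance whenever the biggest hole grows.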
>
>
>> That being said, the current code doesn't exactly reflect that; I can't
>> remember why. Anyway, I would rather make the current code look like
>> what was intended, as I believe it makes allocation easy and fast in the
>> usual case. There is also a reason why I intended to have several SA
>> managers. The idea is to have one SA manager for the VM (because VM
>> allocations might last longer) and one for all the ring objects
>> (semaphores, IBs, ...), because the usual case is that all rings sync
>> from time to time and they all progress in parallel.
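Purely as an illustration of that split (the sizes and the init helper
below are placeholders, not values or functions from the driver):

struct radeon_sa_manager vm_sa;	   /* VM page tables: long-lived */
struct radeon_sa_manager ring_sa;  /* IBs, semaphores, ...: ring-lived */

r = sa_manager_init(rdev, &vm_sa, VM_SA_SIZE);
if (r)
	return r;
r = sa_manager_init(rdev, &ring_sa, RING_SA_SIZE);
if (r)
	return r;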
>
> Yeah, even with your good intentions the current code just turned out to be
> a linear search over all the allocations, and that didn't seem very
> efficient. Making the code behave like the comments describe was also part
> of my intention.
>
> Anyway, I think it's definitely an improvement, but there obviously is
> still room for new ideas. E.g. we could track how trustworthy the
> biggest_hole is and then abort early for allocations that can never
> succeed. Or we could keep track of the last allocation and, on new
> allocations, first try once to place them directly behind it; that would
> give the whole allocator an even more ring-like behavior.
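Both ideas would be small additions. Roughly (again just a sketch; the
members and helpers below are invented for illustration):

/* Returns the offset on success or a negative error code. */
static int sa_alloc_fast(struct sa_manager *sa, unsigned nbytes)
{
	unsigned offset = sa->last_offset + sa->last_size;

	/* early abort: if even the biggest hole can't hold the
	 * request, there is no point in searching at all */
	if (nbytes > sa->biggest_hole_size)
		return -ENOMEM;

	/* try once directly behind the last allocation, which keeps
	 * the allocator ring-like in the common case */
	if (offset + nbytes <= sa->size &&
	    sa_range_is_free(sa, offset, nbytes)) {
		sa->last_offset = offset;
		sa->last_size = nbytes;
		return offset;
	}

	/* otherwise fall back to the biggest hole search */
	return sa_find_hole(sa, nbytes);
}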
>
> And by the way: having two allocators, one for the VM and another for the
> ring-like stuff, sounds like the right thing to do, since their allocations
> seem to have different life cycles.
>
> Christian.
This one is still NAK. I have read it again, and I am going to post a
simpler version shortly.
Cheers,
Jerome