[PATCH v2] drm/amd/amdgpu: Recovery vcn instance iterate.

Zhou, Peng Ju PengJu.Zhou at amd.com
Mon Jul 19 03:35:39 UTC 2021


[AMD Official Use Only]

Hi Leo
Can you help review this patch?


---------------------------------------------------------------------- 
BW
Pengju Zhou




> -----Original Message-----
> From: amd-gfx <amd-gfx-bounces at lists.freedesktop.org> On Behalf Of Zhou,
> Peng Ju
> Sent: Friday, July 16, 2021 10:15 AM
> To: Liu, Monk <Monk.Liu at amd.com>; Alex Deucher
> <alexdeucher at gmail.com>; Liu, Leo <Leo.Liu at amd.com>
> Cc: amd-gfx list <amd-gfx at lists.freedesktop.org>
> Subject: RE: [PATCH v2] drm/amd/amdgpu: Recovery vcn instance iterate.
> 
> [AMD Official Use Only]
> 
> Hi @Liu, Leo
> 
> Can you help review this patch?
> Monk and Alex have already reviewed it.
> 
> 
> ----------------------------------------------------------------------
> BW
> Pengju Zhou
> 
> 
> 
> > -----Original Message-----
> > From: Liu, Monk <Monk.Liu at amd.com>
> > Sent: Thursday, July 15, 2021 7:54 AM
> > To: Alex Deucher <alexdeucher at gmail.com>; Zhou, Peng Ju
> > <PengJu.Zhou at amd.com>; Liu, Leo <Leo.Liu at amd.com>
> > Cc: amd-gfx list <amd-gfx at lists.freedesktop.org>
> > Subject: RE: [PATCH v2] drm/amd/amdgpu: Recovery vcn instance iterate.
> >
> > [AMD Official Use Only]
> >
> > Reviewed-by: Monk Liu <monk.liu at amd.com>
> >
> > You might need @Liu, Leo's review as well
> >
> > Thanks
> >
> > ------------------------------------------
> > Monk Liu | Cloud-GPU Core team
> > ------------------------------------------
> >
> > -----Original Message-----
> > From: amd-gfx <amd-gfx-bounces at lists.freedesktop.org> On Behalf Of Alex
> > Deucher
> > Sent: Wednesday, July 14, 2021 10:49 PM
> > To: Zhou, Peng Ju <PengJu.Zhou at amd.com>
> > Cc: amd-gfx list <amd-gfx at lists.freedesktop.org>
> > Subject: Re: [PATCH v2] drm/amd/amdgpu: Recovery vcn instance iterate.
> >
> > On Tue, Jul 13, 2021 at 6:31 AM Peng Ju Zhou <PengJu.Zhou at amd.com> wrote:
> > >
> > > The previous logic recorded the number of valid vcn instances for
> > > use under SRIOV, which is error-prone because vcn access is based
> > > on the index of the vcn instance.
> > >
> > > Instead, check whether a vcn instance is enabled before doing the
> > > instance init.
> > >
> > > Signed-off-by: Peng Ju Zhou <PengJu.Zhou at amd.com>
> >
> > Acked-by: Alex Deucher <alexander.deucher at amd.com>
> >
> > > ---
> > >  drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 33 ++++++++++++++++-----------
> > >  1 file changed, 20 insertions(+), 13 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> > > b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> > > index c3580de3ea9c..d11fea2c9d90 100644
> > > --- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> > > +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> > > @@ -88,9 +88,7 @@ static int vcn_v3_0_early_init(void *handle)
> > >         int i;
> > >
> > >         if (amdgpu_sriov_vf(adev)) {
> > > -               for (i = 0; i < VCN_INSTANCES_SIENNA_CICHLID; i++)
> > > -                       if (amdgpu_vcn_is_disabled_vcn(adev, VCN_DECODE_RING, i))
> > > -                               adev->vcn.num_vcn_inst++;
> > > +               adev->vcn.num_vcn_inst = VCN_INSTANCES_SIENNA_CICHLID;
> > >                 adev->vcn.harvest_config = 0;
> > >                 adev->vcn.num_enc_rings = 1;
> > >
> > > @@ -151,8 +149,7 @@ static int vcn_v3_0_sw_init(void *handle)
> > >                 adev->firmware.fw_size +=
> > >                         ALIGN(le32_to_cpu(hdr->ucode_size_bytes),
> > > PAGE_SIZE);
> > >
> > > -               if ((adev->vcn.num_vcn_inst == VCN_INSTANCES_SIENNA_CICHLID) ||
> > > -                   (amdgpu_sriov_vf(adev) && adev->asic_type == CHIP_SIENNA_CICHLID)) {
> > > +               if (adev->vcn.num_vcn_inst == VCN_INSTANCES_SIENNA_CICHLID) {
> > >                         adev->firmware.ucode[AMDGPU_UCODE_ID_VCN1].ucode_id = AMDGPU_UCODE_ID_VCN1;
> > >                         adev->firmware.ucode[AMDGPU_UCODE_ID_VCN1].fw = adev->vcn.fw;
> > >                         adev->firmware.fw_size +=
> > > @@ -322,18 +319,28 @@ static int vcn_v3_0_hw_init(void *handle)
> > >                                 continue;
> > >
> > >                         ring = &adev->vcn.inst[i].ring_dec;
> > > -                       ring->wptr = 0;
> > > -                       ring->wptr_old = 0;
> > > -                       vcn_v3_0_dec_ring_set_wptr(ring);
> > > -                       ring->sched.ready = true;
> > > -
> > > -                       for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
> > > -                               ring = &adev->vcn.inst[i].ring_enc[j];
> > > +                       if (amdgpu_vcn_is_disabled_vcn(adev, VCN_DECODE_RING, i)) {
> > > +                               ring->sched.ready = false;
> > > +                               dev_info(adev->dev, "ring %s is disabled by hypervisor\n", ring->name);
> > > +                       } else {
> > >                                 ring->wptr = 0;
> > >                                 ring->wptr_old = 0;
> > > -                               vcn_v3_0_enc_ring_set_wptr(ring);
> > > +                               vcn_v3_0_dec_ring_set_wptr(ring);
> > >                                 ring->sched.ready = true;
> > >                         }
> > > +
> > > +                       for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
> > > +                               ring = &adev->vcn.inst[i].ring_enc[j];
> > > +                               if (amdgpu_vcn_is_disabled_vcn(adev, VCN_ENCODE_RING, i)) {
> > > +                                       ring->sched.ready = false;
> > > +                                       dev_info(adev->dev, "ring %s is disabled by hypervisor\n", ring->name);
> > > +                               } else {
> > > +                                       ring->wptr = 0;
> > > +                                       ring->wptr_old = 0;
> > > +                                       vcn_v3_0_enc_ring_set_wptr(ring);
> > > +                                       ring->sched.ready = true;
> > > +                               }
> > > +                       }
> > >                 }
> > >         } else {
> > >                 for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
> > > --
> > > 2.17.1
> > >
> > > _______________________________________________
> > > amd-gfx mailing list
> > > amd-gfx at lists.freedesktop.org
> > > https://lists.freedesktop.org/mailman/listinfo/amd-gfx

