[01/25] drm/msm/dpu: fix hw ctl retrieval for mixer muxing

Submitted by Jeykumar Sankaran on Oct. 9, 2018, 4:27 a.m.


Message ID 1539059262-8326-2-git-send-email-jsanka@codeaurora.org
State New
Series "reserve RM resources in CRTC state" ( rev: 1 ) in DRI devel


Commit Message

Layer mixer/pingpong block counts and hw ctl block counts
will not be the same for all topologies (e.g. layer
mixer muxing to a single interface).

Use the encoder's split_role info to retrieve the
respective control path for programming.

Signed-off-by: Jeykumar Sankaran <jsanka@codeaurora.org>
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)


diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
index 96cdf06..d12f896 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
@@ -1060,6 +1060,7 @@  static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
 
 	for (i = 0; i < dpu_enc->num_phys_encs; i++) {
 		struct dpu_encoder_phys *phys = dpu_enc->phys_encs[i];
+		int ctl_index;
 
 		if (phys) {
 			if (!dpu_enc->hw_pp[i]) {
@@ -1068,14 +1069,16 @@  static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
 				return;
 			}
 
-			if (!hw_ctl[i]) {
+			ctl_index = phys->split_role == ENC_ROLE_SLAVE ? 1 : 0;
+
+			if (!hw_ctl[ctl_index]) {
 				DPU_ERROR_ENC(dpu_enc, "no ctl block assigned"
-					     "at idx: %d\n", i);
+					     "at idx: %d\n", ctl_index);
 				return;
 			}
 
 			phys->hw_pp = dpu_enc->hw_pp[i];
-			phys->hw_ctl = hw_ctl[i];
+			phys->hw_ctl = hw_ctl[ctl_index];
 
 			phys->connector = conn->state->connector;
 			if (phys->ops.mode_set)

Comments

On Mon, Oct 08, 2018 at 09:27:18PM -0700, Jeykumar Sankaran wrote:
> Layer mixer/pingpong block counts and hw ctl block counts
> will not be same for all the topologies (e.g. layer
> mixer muxing to single interface)
> 
> Use the encoder's split_role info to retrieve the
> respective control path for programming.
> 
> Signed-off-by: Jeykumar Sankaran <jsanka@codeaurora.org>
> ---
>  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
> index 96cdf06..d12f896 100644
> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
> @@ -1060,6 +1060,7 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
>  
>  	for (i = 0; i < dpu_enc->num_phys_encs; i++) {
>  		struct dpu_encoder_phys *phys = dpu_enc->phys_encs[i];
> +		int ctl_index;
>  
>  		if (phys) {
>  			if (!dpu_enc->hw_pp[i]) {
> @@ -1068,14 +1069,16 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
>  				return;
>  			}
>  
> -			if (!hw_ctl[i]) {
> +			ctl_index = phys->split_role == ENC_ROLE_SLAVE ? 1 : 0;
> +
> +			if (!hw_ctl[ctl_index]) {
>  				DPU_ERROR_ENC(dpu_enc, "no ctl block assigned"
> -					     "at idx: %d\n", i);
> +					     "at idx: %d\n", ctl_index);

I know you are only updating the previous code, but we shouldn't be
splitting the string here, for grep purposes.

>  				return;
>  			}
>  
>  			phys->hw_pp = dpu_enc->hw_pp[i];
> -			phys->hw_ctl = hw_ctl[i];
> +			phys->hw_ctl = hw_ctl[ctl_index];
>  
>  			phys->connector = conn->state->connector;
>  			if (phys->ops.mode_set)
On Mon, Oct 08, 2018 at 09:27:18PM -0700, Jeykumar Sankaran wrote:
> Layer mixer/pingpong block counts and hw ctl block counts
> will not be same for all the topologies (e.g. layer
> mixer muxing to single interface)
> 
> Use the encoder's split_role info to retrieve the
> respective control path for programming.
> 
> Signed-off-by: Jeykumar Sankaran <jsanka@codeaurora.org>
> ---
>  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
> index 96cdf06..d12f896 100644
> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
> @@ -1060,6 +1060,7 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
>  
>  	for (i = 0; i < dpu_enc->num_phys_encs; i++) {
>  		struct dpu_encoder_phys *phys = dpu_enc->phys_encs[i];
> +		int ctl_index;
>  
>  		if (phys) {
>  			if (!dpu_enc->hw_pp[i]) {
> @@ -1068,14 +1069,16 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
>  				return;
>  			}
>  
> -			if (!hw_ctl[i]) {
> +			ctl_index = phys->split_role == ENC_ROLE_SLAVE ? 1 : 0;
> +

What if MAX_CHANNELS_PER_ENC isn't 2? Similarly, what if num_phys_encs >
MAX_CHANNELS_PER_ENC? It seems like there should be a more formal
relationship between all of these various values (num_of_h_tiles is
assumed to be <= 2 as well). If one of them changes beyond the assumed
bound, the rest of the driver falls over pretty hard.
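One way to make that relationship explicit is to derive the ctl index
through a helper that validates it against the number of ctl blocks
actually reserved. This is only a standalone sketch with a hypothetical
helper name (dpu_ctl_index), not the actual driver code:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative model of the patch's ctl_index derivation, with an
 * explicit bounds check.  MAX_CHANNELS_PER_ENC and ENC_ROLE_* mirror
 * the driver; dpu_ctl_index() is hypothetical. */
#define MAX_CHANNELS_PER_ENC 2

enum enc_split_role { ENC_ROLE_SOLO, ENC_ROLE_MASTER, ENC_ROLE_SLAVE };

/* Return the hw_ctl[] index for a phys encoder, or -1 when the derived
 * index falls outside the num_ctl blocks that were reserved. */
static int dpu_ctl_index(enum enc_split_role role, size_t num_ctl)
{
	size_t idx = (role == ENC_ROLE_SLAVE) ? 1 : 0;

	if (num_ctl > MAX_CHANNELS_PER_ENC || idx >= num_ctl)
		return -1;
	return (int)idx;
}
```

With a check like this, a topology whose reservation produced fewer ctl
blocks than the split roles require would fail loudly instead of reading
past hw_ctl[].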


> +			if (!hw_ctl[ctl_index]) {
>  				DPU_ERROR_ENC(dpu_enc, "no ctl block assigned"
> -					     "at idx: %d\n", i);
> +					     "at idx: %d\n", ctl_index);
>  				return;

When you return on error here, should you give back the resources that you've
already provisioned?

>  			}
>  
>  			phys->hw_pp = dpu_enc->hw_pp[i];
> -			phys->hw_ctl = hw_ctl[i];
> +			phys->hw_ctl = hw_ctl[ctl_index];
>  
>  			phys->connector = conn->state->connector;
>  			if (phys->ops.mode_set)
> -- 
> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project
> 
> _______________________________________________
> Freedreno mailing list
> Freedreno@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/freedreno
On 2018-10-09 11:07, Sean Paul wrote:
> On Mon, Oct 08, 2018 at 09:27:18PM -0700, Jeykumar Sankaran wrote:
>> Layer mixer/pingpong block counts and hw ctl block counts
>> will not be same for all the topologies (e.g. layer
>> mixer muxing to single interface)
>> 
>> Use the encoder's split_role info to retrieve the
>> respective control path for programming.
>> 
>> Signed-off-by: Jeykumar Sankaran <jsanka@codeaurora.org>
>> ---
>>  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 9 ++++++---
>>  1 file changed, 6 insertions(+), 3 deletions(-)
>> 
>> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
>> index 96cdf06..d12f896 100644
>> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
>> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
>> @@ -1060,6 +1060,7 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
>> 
>>  	for (i = 0; i < dpu_enc->num_phys_encs; i++) {
>>  		struct dpu_encoder_phys *phys = dpu_enc->phys_encs[i];
>> +		int ctl_index;
>> 
>>  		if (phys) {
>>  			if (!dpu_enc->hw_pp[i]) {
>> @@ -1068,14 +1069,16 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
>>  				return;
>>  			}
>> 
>> -			if (!hw_ctl[i]) {
>> +			ctl_index = phys->split_role == ENC_ROLE_SLAVE ? 1 : 0;
>> +
> 
> What if MAX_CHANNELS_PER_ENC isn't 2? Similarly, what if num_phys_encs >
> MAX_CHANNELS_PER_ENC? It seems like there should be a more formal
> relationship between all of these various values (num_of_h_tiles
> assumed to be <= 2 as well). If one of them changes beyond the assumed
> bound, the rest of the driver falls over pretty hard.
> 
MAX_CHANNELS_PER_ENC is set to 2 to represent a HW limitation of the
chipset: we cannot gang up more than 2 LM chains to an interface.
Supporting more than 2 might demand much larger changes than validating
for boundaries.

num_phys_enc is the max number of phys encoders we create, as we are
looping through num_of_h_tiles, which cannot be more than the priv->dsi
array size.

So it's very unlikely we would expect these loops to go out of bounds!

Thanks,
Jeykumar S.
> 
>> +			if (!hw_ctl[ctl_index]) {
>>  				DPU_ERROR_ENC(dpu_enc, "no ctl block assigned"
>> -					     "at idx: %d\n", i);
>> +					     "at idx: %d\n", ctl_index);
>>  				return;
> 
> When you return on error here, should you give back the resources that
> you've already provisioned?
> 
>>  			}
>> 
>>  			phys->hw_pp = dpu_enc->hw_pp[i];
>> -			phys->hw_ctl = hw_ctl[i];
>> +			phys->hw_ctl = hw_ctl[ctl_index];
>> 
>>  			phys->connector = conn->state->connector;
>>  			if (phys->ops.mode_set)
>> --
>> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
>> a Linux Foundation Collaborative Project
>> 
>> _______________________________________________
>> Freedreno mailing list
>> Freedreno@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/freedreno
On Tue, Oct 09, 2018 at 10:46:41PM -0700, Jeykumar Sankaran wrote:
> On 2018-10-09 11:07, Sean Paul wrote:
> > On Mon, Oct 08, 2018 at 09:27:18PM -0700, Jeykumar Sankaran wrote:
> > > Layer mixer/pingpong block counts and hw ctl block counts
> > > will not be same for all the topologies (e.g. layer
> > > mixer muxing to single interface)
> > > 
> > > Use the encoder's split_role info to retrieve the
> > > respective control path for programming.
> > > 
> > > Signed-off-by: Jeykumar Sankaran <jsanka@codeaurora.org>
> > > ---
> > >  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 9 ++++++---
> > >  1 file changed, 6 insertions(+), 3 deletions(-)
> > > 
> > diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
> > > index 96cdf06..d12f896 100644
> > > --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
> > > +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
> > > @@ -1060,6 +1060,7 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
> > > 
> > >  	for (i = 0; i < dpu_enc->num_phys_encs; i++) {
> > >  		struct dpu_encoder_phys *phys = dpu_enc->phys_encs[i];
> > > +		int ctl_index;
> > > 
> > >  		if (phys) {
> > >  			if (!dpu_enc->hw_pp[i]) {
> > > @@ -1068,14 +1069,16 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
> > >  				return;
> > >  			}
> > > 
> > > -			if (!hw_ctl[i]) {
> > > +			ctl_index = phys->split_role == ENC_ROLE_SLAVE ? 1 : 0;
> > > +
> > 
> > What if MAX_CHANNELS_PER_ENC isn't 2? Similarly, what if num_phys_encs >
> > MAX_CHANNELS_PER_ENC? It seems like there should be a more formal
> > relationship between all of these various values (num_of_h_tiles
> > assumed to be <= 2 as well). If one of them changes beyond the
> > assumed bound, the rest of the driver falls over pretty hard.
> > 
> MAX_CHANNELS_PER_ENC is set to 2 to represent a HW limitation of the
> chipset: we cannot gang up more than 2 LM chains to an interface.
> Supporting more than 2 might demand much larger changes than validating
> for boundaries.
> 
> num_phys_enc is the max number of phys encoders we create, as we are
> looping through num_of_h_tiles, which cannot be more than the priv->dsi
> array size.
> 
> So it's very unlikely we would expect these loops to go out of bounds!

For now, sure. However, adding support for a new revision of hardware
will be a pain if we add more assumptions, and it makes it _really_ hard
to understand the code if you don't have Qualcomm employee-level access
to the hardware design :).

So this is why I'm advocating for reducing the number of "num_of_"
values we assume are all in the same range. It's a lot easier to
understand the hardware when you can see that a phys encoder is needed
per h tile, and that a ctl/pp is needed per phys encoder.

Anyways, just my $0.02.

Sean

> 
> Thanks,
> Jeykumar S.
> > 
> > > +			if (!hw_ctl[ctl_index]) {
> > >  				DPU_ERROR_ENC(dpu_enc, "no ctl block assigned"
> > > -					     "at idx: %d\n", i);
> > > +					     "at idx: %d\n", ctl_index);
> > >  				return;
> > 
> > When you return on error here, should you give back the resources
> > that you've already provisioned?
> > 
> > >  			}
> > > 
> > >  			phys->hw_pp = dpu_enc->hw_pp[i];
> > > -			phys->hw_ctl = hw_ctl[i];
> > > +			phys->hw_ctl = hw_ctl[ctl_index];
> > > 
> > >  			phys->connector = conn->state->connector;
> > >  			if (phys->ops.mode_set)
> > > --
> > > The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> > > a Linux Foundation Collaborative Project
> > > 
> > > _______________________________________________
> > > Freedreno mailing list
> > > Freedreno@lists.freedesktop.org
> > > https://lists.freedesktop.org/mailman/listinfo/freedreno
> 
> -- 
> Jeykumar S
On 2018-10-10 07:29, Sean Paul wrote:
> On Tue, Oct 09, 2018 at 10:46:41PM -0700, Jeykumar Sankaran wrote:
>> On 2018-10-09 11:07, Sean Paul wrote:
>> > On Mon, Oct 08, 2018 at 09:27:18PM -0700, Jeykumar Sankaran wrote:
>> > > Layer mixer/pingpong block counts and hw ctl block counts
>> > > will not be same for all the topologies (e.g. layer
>> > > mixer muxing to single interface)
>> > >
>> > > Use the encoder's split_role info to retrieve the
>> > > respective control path for programming.
>> > >
>> > > Signed-off-by: Jeykumar Sankaran <jsanka@codeaurora.org>
>> > > ---
>> > >  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 9 ++++++---
>> > >  1 file changed, 6 insertions(+), 3 deletions(-)
>> > >
>> > > diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
>> > > index 96cdf06..d12f896 100644
>> > > --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
>> > > +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
>> > > @@ -1060,6 +1060,7 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
>> > >
>> > >  	for (i = 0; i < dpu_enc->num_phys_encs; i++) {
>> > >  		struct dpu_encoder_phys *phys = dpu_enc->phys_encs[i];
>> > > +		int ctl_index;
>> > >
>> > >  		if (phys) {
>> > >  			if (!dpu_enc->hw_pp[i]) {
>> > > @@ -1068,14 +1069,16 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
>> > >  				return;
>> > >  			}
>> > >
>> > > -			if (!hw_ctl[i]) {
>> > > +			ctl_index = phys->split_role == ENC_ROLE_SLAVE ? 1 : 0;
>> > > +
>> >
>> > What if MAX_CHANNELS_PER_ENC isn't 2? Similarly, what if num_phys_encs >
>> > MAX_CHANNELS_PER_ENC? It seems like there should be a more formal
>> > relationship between all of these various values (num_of_h_tiles
>> > assumed to be <= 2 as well). If one of them changes beyond the
>> > assumed bound, the rest of the driver falls over pretty hard.
>> >
>> MAX_CHANNELS_PER_ENC is set to 2 to represent a HW limitation of the
>> chipset: we cannot gang up more than 2 LM chains to an interface.
>> Supporting more than 2 might demand much larger changes than
>> validating for boundaries.
>> 
>> num_phys_enc is the max number of phys encoders we create, as we are
>> looping through num_of_h_tiles, which cannot be more than the
>> priv->dsi array size.
>> 
>> So it's very unlikely we would expect these loops to go out of bounds!
> 
> For now, sure. However, adding support for a new revision of hardware
> will be a pain if we add more assumptions, and it makes it _really_
> hard to understand the code if you don't have Qualcomm employee-level
> access to the hardware design :).
> 
I am having a hard time understanding why you have to see these counts
as "assumptions".

Except for MAX_CHANNELS_PER_ENC, all the other counts are either
calculated or derived from the other modules linked to the topology.

h_tiles is the drm_connector terminology which represents the number of
panels the display is driving. We use this information to determine the
HW block chains in the MDP. HW block counts (pp or ctl) need not be the
same as the h_tile count.

I believe maintaining the counts independently at each layer allows us
to have more flexibility to support independent HW chaining for future
revisions.

Would it be more convincing if I get the MAX_CHANNELS_PER_ENC value from
catalog.c?

> So this is why I'm advocating for the reduction of the number of 
> "num_of_"
> values we assume are all in the same range. It's a lot easier to
> understand the
> hardware when you can see that a phys encoder is needed per h tile, and
> that a
> ctl/pp is needed per phys encoder.
This is exactly the idea I don't want to convey to the reader. For the
LM merge path, each phys encoder will not have its own control. Based on
the topology we are supporting, HW block counts can vary. We can even
drive:
- 2 interfaces with 1 ctl and 1 ping pong
- 1 interface with 1 ctl and 2 ping pongs
- 1 interface with 1 ctl and 1 ping pong
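Those three cases can be written down as a small count table (purely
illustrative, with a hypothetical struct name, not driver code) to show
that interface, ctl, and ping pong counts vary independently:

```c
#include <assert.h>

/* Purely illustrative: the three topologies listed above, expressed as
 * (interface, ctl, ping pong) counts.  The struct name is hypothetical. */
struct topology_counts {
	int num_intf;
	int num_ctl;
	int num_pp;
};

static const struct topology_counts topologies[] = {
	{ 2, 1, 1 },	/* 2 interfaces driven by 1 ctl and 1 ping pong */
	{ 1, 1, 2 },	/* 1 interface, 1 ctl, 2 ping pongs (LM merge) */
	{ 1, 1, 1 },	/* 1 interface, 1 ctl, 1 ping pong */
};
```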

Thanks,
Jeykumar S.

> 
> Anyways, just my $0.02.
> 
> Sean
> 
>> 
>> Thanks,
>> Jeykumar S.
>> >
>> > > +			if (!hw_ctl[ctl_index]) {
>> > >  				DPU_ERROR_ENC(dpu_enc, "no ctl block assigned"
>> > > -					     "at idx: %d\n", i);
>> > > +					     "at idx: %d\n", ctl_index);
>> > >  				return;
>> >
>> > When you return on error here, should you give back the resources
>> > that you've already provisioned?
>> >
>> > >  			}
>> > >
>> > >  			phys->hw_pp = dpu_enc->hw_pp[i];
>> > > -			phys->hw_ctl = hw_ctl[i];
>> > > +			phys->hw_ctl = hw_ctl[ctl_index];
>> > >
>> > >  			phys->connector = conn->state->connector;
>> > >  			if (phys->ops.mode_set)
>> > > --
>> > > The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
>> > > a Linux Foundation Collaborative Project
>> > >
>> > > _______________________________________________
>> > > Freedreno mailing list
>> > > Freedreno@lists.freedesktop.org
>> > > https://lists.freedesktop.org/mailman/listinfo/freedreno
>> 
>> --
>> Jeykumar S
On Wed, Oct 10, 2018 at 11:35:56AM -0700, Jeykumar Sankaran wrote:
> On 2018-10-10 07:29, Sean Paul wrote:
> > On Tue, Oct 09, 2018 at 10:46:41PM -0700, Jeykumar Sankaran wrote:
> > > On 2018-10-09 11:07, Sean Paul wrote:
> > > > On Mon, Oct 08, 2018 at 09:27:18PM -0700, Jeykumar Sankaran wrote:
> > > > > Layer mixer/pingpong block counts and hw ctl block counts
> > > > > will not be same for all the topologies (e.g. layer
> > > > > mixer muxing to single interface)
> > > > >
> > > > > Use the encoder's split_role info to retrieve the
> > > > > respective control path for programming.
> > > > >
> > > > > Signed-off-by: Jeykumar Sankaran <jsanka@codeaurora.org>
> > > > > ---
> > > > >  drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 9 ++++++---
> > > > >  1 file changed, 6 insertions(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
> > > > > index 96cdf06..d12f896 100644
> > > > > --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
> > > > > +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
> > > > > @@ -1060,6 +1060,7 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
> > > > >
> > > > >  	for (i = 0; i < dpu_enc->num_phys_encs; i++) {
> > > > >  		struct dpu_encoder_phys *phys = dpu_enc->phys_encs[i];
> > > > > +		int ctl_index;
> > > > >
> > > > >  		if (phys) {
> > > > >  			if (!dpu_enc->hw_pp[i]) {
> > > > > @@ -1068,14 +1069,16 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
> > > > >  				return;
> > > > >  			}
> > > > >
> > > > > -			if (!hw_ctl[i]) {
> > > > > +			ctl_index = phys->split_role == ENC_ROLE_SLAVE ? 1 : 0;
> > > > > +
> > > >
> > > > What if MAX_CHANNELS_PER_ENC isn't 2? Similarly, what if num_phys_encs >
> > > > MAX_CHANNELS_PER_ENC? It seems like there should be a more formal
> > > > relationship between all of these various values (num_of_h_tiles
> > > > assumed to be <= 2 as well). If one of them changes beyond the
> > > > assumed bound, the rest of the driver falls over pretty hard.
> > > >
> > > MAX_CHANNELS_PER_ENC is set to 2 to represent a HW limitation of
> > > the chipset: we cannot gang up more than 2 LM chains to an
> > > interface. Supporting more than 2 might demand much larger changes
> > > than validating for boundaries.
> > > 
> > > num_phys_enc is the max number of phys encoders we create, as we
> > > are looping through num_of_h_tiles, which cannot be more than the
> > > priv->dsi array size.
> > > 
> > > So it's very unlikely we would expect these loops to go out of
> > > bounds!
> > 
> > For now, sure. However, adding support for a new revision of
> > hardware will be a pain if we add more assumptions, and it makes it
> > _really_ hard to understand the code if you don't have Qualcomm
> > employee-level access to the hardware design :).
> > 
> I am having a hard time understanding why you have to see these counts as
> "assumptions".


        ctl_index = phys->split_role == ENC_ROLE_SLAVE ? 1 : 0
        if (!hw_ctl[ctl_index]) {

Assumes that ARRAY_SIZE(hw_ctl) > 1 if phys is a slave


        if (!dpu_enc->hw_pp[i]) {

Assumes that ARRAY_SIZE(hw_pp) >= ARRAY_SIZE(dpu_enc->phys_encs)
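Both assumptions could be pinned down at compile time. Here is a
standalone model with hypothetical array and struct names (not the
driver's real structures); in the kernel itself this is the kind of
invariant BUILD_BUG_ON() exists for:

```c
#include <assert.h>

/* Standalone model of the two implicit assumptions above.  The fake_*
 * types and array sizes are hypothetical stand-ins for the driver's. */
#define MAX_CHANNELS_PER_ENC 2
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

struct fake_ctl  { int id; };
struct fake_pp   { int id; };
struct fake_phys { int id; };

static struct fake_ctl  hw_ctl[MAX_CHANNELS_PER_ENC];
static struct fake_pp   hw_pp[MAX_CHANNELS_PER_ENC];
static struct fake_phys phys_encs[MAX_CHANNELS_PER_ENC];

/* A slave encoder indexes hw_ctl[1], so the array needs more than one
 * entry whenever a slave role can exist. */
_Static_assert(ARRAY_SIZE(hw_ctl) > 1, "slave role needs a second ctl");

/* Every phys encoder i dereferences hw_pp[i]. */
_Static_assert(ARRAY_SIZE(hw_pp) >= ARRAY_SIZE(phys_encs),
	       "one pingpong per phys encoder");
```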


> 
> Except for MAX_CHANNELS_PER_ENC, all the other counts are either calculated
> or derived from the other modules linked to the topology.
> 
> h_tiles is the drm_connector terminology which represents the number of
> panels the display is driving. We use this information to determine the
> HW block chains in the MDP. HW block counts (pp or ctl) need not be the
> same as the h_tile count.

I must be missing something, because it seems that there's a 1:1 mapping of

num_h_tiles : num_phys_encs

And for each physical encoder, there's exactly one intf, pp, ctl assigned to it.
The issue I had was that when we loop through N phys_encs, we assume that there
are at least N hw blocks.

I think the takeaway is that I don't understand the hardware and its constraints
as well as you, so if this makes sense to you, let's go with it. Perhaps I'll
stare at it a while longer to try to refine my mental model.

Sean

> 
> I believe maintaining the counts independently at each layer allows us
> to have more flexibility to support independent HW chaining for future
> revisions.
> 
> Would it be more convincing if I get the MAX_CHANNELS_PER_ENC value
> from catalog.c?
> 
> > So this is why I'm advocating for reducing the number of "num_of_"
> > values we assume are all in the same range. It's a lot easier to
> > understand the hardware when you can see that a phys encoder is
> > needed per h tile, and that a ctl/pp is needed per phys encoder.
> This is exactly the idea I don't want to convey to the reader. For the
> LM merge path, each phys encoder will not have its own control. Based
> on the topology we are supporting, HW block counts can vary. We can
> even drive:
> - 2 interfaces with 1 ctl and 1 ping pong
> - 1 interface with 1 ctl and 2 ping pongs
> - 1 interface with 1 ctl and 1 ping pong
> 
> Thanks,
> Jeykumar S.
> 
> > 
> > Anyways, just my $0.02.
> > 
> > Sean
> > 
> > > 
> > > Thanks,
> > > Jeykumar S.
> > > >
> > > > > +			if (!hw_ctl[ctl_index]) {
> > > > >  				DPU_ERROR_ENC(dpu_enc, "no ctl block assigned"
> > > > > -					     "at idx: %d\n", i);
> > > > > +					     "at idx: %d\n", ctl_index);
> > > > >  				return;
> > > >
> > > > When you return on error here, should you give back the
> > > > resources that you've already provisioned?
> > > >
> > > > >  			}
> > > > >
> > > > >  			phys->hw_pp = dpu_enc->hw_pp[i];
> > > > > -			phys->hw_ctl = hw_ctl[i];
> > > > > +			phys->hw_ctl = hw_ctl[ctl_index];
> > > > >
> > > > >  			phys->connector = conn->state->connector;
> > > > >  			if (phys->ops.mode_set)
> > > > > --
> > > > > The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> > > > > a Linux Foundation Collaborative Project
> > > > >
> > > > > _______________________________________________
> > > > > Freedreno mailing list
> > > > > Freedreno@lists.freedesktop.org
> > > > > https://lists.freedesktop.org/mailman/listinfo/freedreno
> > > 
> > > --
> > > Jeykumar S
> 
> -- 
> Jeykumar S