drm/amdgpu/gfx7: move eop programming per queue

Submitted by Alex Deucher on Nov. 23, 2016, 8:27 p.m.

Details

Message ID 1479932826-3362-2-git-send-email-alexander.deucher@amd.com
State New
Headers show
Series "drm/amdgpu/gfx8: move eop programming per queue" (rev 2) in AMD X.Org drivers


Commit Message

Alex Deucher Nov. 23, 2016, 8:27 p.m.
It's per queue not per pipe.

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c | 48 ++++++++++++++---------------------
 1 file changed, 19 insertions(+), 29 deletions(-)


diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
index 1a745cf..1cde80f 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
@@ -2815,7 +2815,7 @@  static int gfx_v7_0_mec_init(struct amdgpu_device *adev)
 
 	if (adev->gfx.mec.hpd_eop_obj == NULL) {
 		r = amdgpu_bo_create(adev,
-				     adev->gfx.mec.num_mec *adev->gfx.mec.num_pipe * MEC_HPD_SIZE * 2,
+				     adev->gfx.mec.num_queue * MEC_HPD_SIZE,
 				     PAGE_SIZE, true,
 				     AMDGPU_GEM_DOMAIN_GTT, 0, NULL, NULL,
 				     &adev->gfx.mec.hpd_eop_obj);
@@ -2845,7 +2845,7 @@  static int gfx_v7_0_mec_init(struct amdgpu_device *adev)
 	}
 
 	/* clear memory.  Not sure if this is required or not */
-	memset(hpd, 0, adev->gfx.mec.num_mec *adev->gfx.mec.num_pipe * MEC_HPD_SIZE * 2);
+	memset(hpd, 0, adev->gfx.mec.num_queue * MEC_HPD_SIZE);
 
 	amdgpu_bo_kunmap(adev->gfx.mec.hpd_eop_obj);
 	amdgpu_bo_unreserve(adev->gfx.mec.hpd_eop_obj);
@@ -2947,33 +2947,7 @@  static int gfx_v7_0_cp_compute_resume(struct amdgpu_device *adev)
 	tmp |= (1 << 23);
 	WREG32(mmCP_CPF_DEBUG, tmp);
 
-	/* init the pipes */
-	mutex_lock(&adev->srbm_mutex);
-	for (i = 0; i < (adev->gfx.mec.num_pipe * adev->gfx.mec.num_mec); i++) {
-		int me = (i < 4) ? 1 : 2;
-		int pipe = (i < 4) ? i : (i - 4);
-
-		eop_gpu_addr = adev->gfx.mec.hpd_eop_gpu_addr + (i * MEC_HPD_SIZE * 2);
-
-		cik_srbm_select(adev, me, pipe, 0, 0);
-
-		/* write the EOP addr */
-		WREG32(mmCP_HPD_EOP_BASE_ADDR, eop_gpu_addr >> 8);
-		WREG32(mmCP_HPD_EOP_BASE_ADDR_HI, upper_32_bits(eop_gpu_addr) >> 8);
-
-		/* set the VMID assigned */
-		WREG32(mmCP_HPD_EOP_VMID, 0);
-
-		/* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
-		tmp = RREG32(mmCP_HPD_EOP_CONTROL);
-		tmp &= ~CP_HPD_EOP_CONTROL__EOP_SIZE_MASK;
-		tmp |= order_base_2(MEC_HPD_SIZE / 8);
-		WREG32(mmCP_HPD_EOP_CONTROL, tmp);
-	}
-	cik_srbm_select(adev, 0, 0, 0, 0);
-	mutex_unlock(&adev->srbm_mutex);
-
-	/* init the queues.  Just two for now. */
+	/* init the queues. */
 	for (i = 0; i < adev->gfx.num_compute_rings; i++) {
 		ring = &adev->gfx.compute_ring[i];
 
@@ -3023,6 +2997,22 @@  static int gfx_v7_0_cp_compute_resume(struct amdgpu_device *adev)
 				ring->pipe,
 				ring->queue, 0);
 
+		eop_gpu_addr = adev->gfx.mec.hpd_eop_gpu_addr + (i * MEC_HPD_SIZE);
+		eop_gpu_addr >>= 8;
+
+		/* write the EOP addr */
+		WREG32(mmCP_HPD_EOP_BASE_ADDR, lower_32_bits(eop_gpu_addr));
+		WREG32(mmCP_HPD_EOP_BASE_ADDR_HI, upper_32_bits(eop_gpu_addr));
+
+		/* set the VMID assigned */
+		WREG32(mmCP_HPD_EOP_VMID, 0);
+
+		/* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
+		tmp = RREG32(mmCP_HPD_EOP_CONTROL);
+		tmp &= ~CP_HPD_EOP_CONTROL__EOP_SIZE_MASK;
+		tmp |= order_base_2(MEC_HPD_SIZE / 8);
+		WREG32(mmCP_HPD_EOP_CONTROL, tmp);
+
 		/* disable wptr polling */
 		tmp = RREG32(mmCP_PQ_WPTR_POLL_CNTL);
 		tmp &= ~CP_PQ_WPTR_POLL_CNTL__EN_MASK;

Comments

On Wed, Nov 23, 2016, at 14:27, Alex Deucher wrote:
> It's per queue not per pipe.

Are you sure? I was under the impression that EOP queues were per-pipe
on Gfx7 and per-queue on Gfx8 onwards (to support context save/restore).
It's also hinted at by the register name (HPD == Hardware Pipe
Descriptor, HQD == Hardware Queue Descriptor).
On Wed, Nov 23, 2016 at 6:00 PM, Jay Cornwall <jay@jcornwall.me> wrote:
> On Wed, Nov 23, 2016, at 14:27, Alex Deucher wrote:
>> It's per queue not per pipe.
>
> Are you sure? I was under the impression that EOP queues were per-pipe
> on Gfx7 and per-queue on Gfx8 onwards (to support context save/restore).
> It's also hinted at by the register name (HPD == Hardware Pipe
> Descriptor, HQD == Hardware Queue Descriptor).

No, I'm not sure.  I have a patch set for amdgpu to leave the
fb_location as programmed by the vbios rather than reprogramming which
resulted in vram not being at 0 in the GPU's address space.  I then
got ring failures on the compute queues until I made the eops per
queue.  It could be something else, but this seemed to fix it.

Alex
On Wed, Nov 23, 2016 at 6:49 PM, Alex Deucher <alexdeucher@gmail.com> wrote:
> On Wed, Nov 23, 2016 at 6:00 PM, Jay Cornwall <jay@jcornwall.me> wrote:
>> On Wed, Nov 23, 2016, at 14:27, Alex Deucher wrote:
>>> It's per queue not per pipe.
>>
>> Are you sure? I was under the impression that EOP queues were per-pipe
>> on Gfx7 and per-queue on Gfx8 onwards (to support context save/restore).
>> It's also hinted at by the register name (HPD == Hardware Pipe
>> Descriptor, HQD == Hardware Queue Descriptor).
>
> No, I'm not sure.  I have a patch set for amdgpu to leave the
> fb_location as programmed by the vbios rather than reprogramming which
> resulted in vram not being at 0 in the GPU's address space.  I then
> got ring failures on the compute queues until I made the eops per
> queue.  It could be something else, but this seemed to fix it.

Actually, I've only tried this on gfx8; I just assumed gfx7 was also
affected, but it's possible this is not needed for gfx7.

Alex
On Wed, Nov 23, 2016 at 6:50 PM, Alex Deucher <alexdeucher@gmail.com> wrote:
> On Wed, Nov 23, 2016 at 6:49 PM, Alex Deucher <alexdeucher@gmail.com> wrote:
>> On Wed, Nov 23, 2016 at 6:00 PM, Jay Cornwall <jay@jcornwall.me> wrote:
>>> On Wed, Nov 23, 2016, at 14:27, Alex Deucher wrote:
>>>> It's per queue not per pipe.
>>>
>>> Are you sure? I was under the impression that EOP queues were per-pipe
>>> on Gfx7 and per-queue on Gfx8 onwards (to support context save/restore).
>>> It's also hinted at by the register name (HPD == Hardware Pipe
>>> Descriptor, HQD == Hardware Queue Descriptor).
>>
>> No, I'm not sure.  I have a patch set for amdgpu to leave the
>> fb_location as programmed by the vbios rather than reprogramming which
>> resulted in vram not being at 0 in the GPU's address space.  I then
>> got ring failures on the compute queues until I made the eops per
>> queue.  It could be something else, but this seemed to fix it.
>
> Actually, I've only tried this on gfx8; I just assumed gfx7 was also
> affected, but it's possible this is not needed for gfx7.

Consider this patch withdrawn.

Alex