[RFCv3,11/15] drm/i915: Introduce execlist context status change notification

Submitted by Wang, Zhi A on March 11, 2016, 10:59 a.m.

Details

Message ID 1457693986-6892-12-git-send-email-zhi.a.wang@intel.com
State New
Series "Introduce GVT context support" (rev 1) in Intel GFX


Commit Message

This patch introduces an approach to track execlist context status
changes.

GVT-g uses a GVT context as the "shadow context". The content of the GVT
context is copied back to the guest after the context becomes idle, so
GVT-g has to know the status of the execlist context.

This feature is configurable through the context creation service.
Currently, only GVT-g creates GEM contexts with status-change
notification enabled.

Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  |  2 ++
 drivers/gpu/drm/i915/intel_lrc.c | 28 ++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/intel_lrc.h |  6 ++++++
 3 files changed, 36 insertions(+)


diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 1281bbf..68b821a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -892,6 +892,8 @@  struct intel_context {
 		u64 lrc_desc;
 		uint32_t *lrc_reg_state;
 		bool root_pointer_dirty;
+		bool need_status_change_notification;
+		struct atomic_notifier_head status_notifier_head;
 	} engine[I915_NUM_RINGS];
 
 	struct list_head link;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 19c6b46..ae1ab92 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -439,6 +439,18 @@  static void execlists_submit_requests(struct drm_i915_gem_request *rq0,
 	execlists_elsp_write(rq0, rq1);
 }
 
+static inline void execlists_context_status_change(
+		struct intel_context *ctx,
+		struct intel_engine_cs *ring,
+		unsigned long status)
+{
+	if (!ctx->engine[ring->id].need_status_change_notification)
+		return;
+
+	atomic_notifier_call_chain(&ctx->engine[ring->id].status_notifier_head,
+			status, NULL);
+}
+
 static void execlists_context_unqueue(struct intel_engine_cs *ring)
 {
 	struct drm_i915_gem_request *req0 = NULL, *req1 = NULL;
@@ -495,6 +507,13 @@  static void execlists_context_unqueue(struct intel_engine_cs *ring)
 
 	WARN_ON(req1 && req1->elsp_submitted);
 
+	execlists_context_status_change(req0->ctx, ring, CONTEXT_SCHEDULE_IN);
+
+	if (req1) {
+		execlists_context_status_change(req1->ctx,
+			ring, CONTEXT_SCHEDULE_IN);
+	}
+
 	execlists_submit_requests(req0, req1);
 }
 
@@ -515,6 +534,8 @@  static bool execlists_check_remove_request(struct intel_engine_cs *ring,
 			     "Never submitted head request\n");
 
 			if (--head_req->elsp_submitted <= 0) {
+				execlists_context_status_change(head_req->ctx,
+					ring, CONTEXT_SCHEDULE_OUT);
 				list_move_tail(&head_req->execlist_link,
 					       &ring->execlist_retired_req_list);
 				return true;
@@ -2590,6 +2611,13 @@  int __intel_lr_context_deferred_alloc(struct intel_context *ctx,
 		}
 		i915_add_request_no_flush(req);
 	}
+
+	if (params->ctx_needs_status_change_notification) {
+		ctx->engine[ring->id].need_status_change_notification = true;
+		ATOMIC_INIT_NOTIFIER_HEAD(
+			&ctx->engine[ring->id].status_notifier_head);
+	}
+
 	return 0;
 
 error_ringbuf:
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 528c4fb..15791d4 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -54,6 +54,11 @@ 
 #define GEN8_CSB_READ_PTR(csb_status) \
 	(((csb_status) & GEN8_CSB_READ_PTR_MASK) >> 8)
 
+enum {
+	CONTEXT_SCHEDULE_IN = 0,
+	CONTEXT_SCHEDULE_OUT,
+};
+
 /* Logical Rings */
 int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request);
 int intel_logical_ring_reserve_space(struct drm_i915_gem_request *request);
@@ -101,6 +106,7 @@  struct intel_lr_context_alloc_params {
 	struct intel_engine_cs *ring;
 	u32 ringbuffer_size;
 	bool ctx_needs_init;
+	bool ctx_needs_status_change_notification;
 };
 
 void intel_lr_context_free(struct intel_context *ctx);

Comments

On Fri, Mar 11, 2016 at 06:59:42PM +0800, Zhi Wang wrote:
> This patch introduces an approach to track the execlist context status
> change.
> 
> GVT-g uses GVT context as the "shadow context". The content inside GVT
> context will be copied back to guest after the context is idle. So GVT-g
> has to know the status of the execlist context.
> 
> This function is configurable in the context creation service. Currently,
> Only GVT-g will create the "status-change-notification" enabled GEM
> context.

Nope. Please hook into the lower-frequency idle mechanism then.
-Chris
Hi Chris:
     Could you elaborate on your idea here? :) We have to know about every status change of the LRC context, not only when to copy the content back to the guest (there is a small window between a request finishing and the context being switched out). We tried i915_wait_request() before, but it didn't help, and it seems only a CSB change shows that the context is really idle.
