[v8,6/6] drm/i915: Cache last IRQ seqno to reduce IRQ overhead

Submitted by John Harrison on May 12, 2016, 9:06 p.m.

Details

Message ID: 1463087196-11688-7-git-send-email-John.C.Harrison@Intel.com
State: New
Series "Convert requests to use struct fence" ( rev: 5 ) in Intel GFX

Commit Message

John Harrison May 12, 2016, 9:06 p.m.
From: John Harrison <John.C.Harrison@Intel.com>

The notify function can be called many times without the seqno
changing. Some of these calls occur to prevent races caused by the
requirement that interrupts not be enabled until requested. However,
even when interrupts are enabled the IRQ handler can be called
multiple times without the ring's seqno value changing. For example,
if two interrupts are generated by batch buffers completing in quick
succession, the first call to the handler processes both completions
but the handler is still executed a second time. This patch reduces
the overhead of these extra calls by caching the last processed seqno
value and exiting early if it has not changed.
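
As a rough illustration of the idea (not the driver code itself), a
minimal user-space C analogue of the early-exit check is sketched
below. The names fake_engine, read_hw_seqno, notify and heavy_passes
are illustrative stand-ins only; the real notify path also takes the
fence lock and walks the signal list.

/*
 * Minimal sketch of "cache the last processed seqno and exit early".
 * Illustrative analogue only, not the i915 implementation.
 */
#include <stdint.h>
#include <stdio.h>

struct fake_engine {
	uint32_t hw_seqno;		/* stands in for the hardware seqno */
	uint32_t last_irq_seqno;	/* last value the notify path handled */
	unsigned int heavy_passes;	/* counts full processing passes */
};

static uint32_t read_hw_seqno(struct fake_engine *engine)
{
	return engine->hw_seqno;
}

/* Stand-in for the interrupt-driven notify path. */
static void notify(struct fake_engine *engine)
{
	uint32_t seqno = read_hw_seqno(engine);

	/* Early exit: the seqno has not moved, so there is no new work. */
	if (seqno == engine->last_irq_seqno)
		return;

	engine->last_irq_seqno = seqno;
	engine->heavy_passes++;	/* the real code walks the signal list here */
}

int main(void)
{
	struct fake_engine engine = { 0 };

	engine.hw_seqno = 2;	/* two batches completed back to back */
	notify(&engine);	/* first IRQ: full processing pass */
	notify(&engine);	/* second IRQ: early exit, seqno unchanged */

	printf("full passes: %u\n", engine.heavy_passes);	/* prints 1 */
	return 0;
}

The second notify() call returns before touching any lock, which is
exactly the overhead the patch removes from the interrupt path.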

v3: New patch for series.

v5: Added comment about last_irq_seqno usage due to code review
feedback (Tvrtko Ursulin).

v6: Minor update to resolve a race condition with the wait_request
optimisation.

v7: Updated to newer nightly - lots of ring -> engine renaming plus an
interface change to get_seqno().

For: VIZ-5190
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c         | 26 ++++++++++++++++++++++----
 drivers/gpu/drm/i915/intel_ringbuffer.h |  1 +
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 4f4e445..9ae6148 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1369,6 +1369,7 @@  out:
 			 * request has not actually been fully processed yet.
 			 */
 			spin_lock_irq(&req->engine->fence_lock);
+			req->engine->last_irq_seqno = 0;
 			i915_gem_request_notify(req->engine, true);
 			spin_unlock_irq(&req->engine->fence_lock);
 		}
@@ -2577,9 +2578,12 @@  i915_gem_init_seqno(struct drm_device *dev, u32 seqno)
 	i915_gem_retire_requests(dev);
 
 	/* Finally reset hw state */
-	for_each_engine(engine, dev_priv)
+	for_each_engine(engine, dev_priv) {
 		intel_ring_init_seqno(engine, seqno);
 
+		engine->last_irq_seqno = 0;
+	}
+
 	return 0;
 }
 
@@ -2900,13 +2904,24 @@  void i915_gem_request_notify(struct intel_engine_cs *engine, bool fence_locked)
 		return;
 	}
 
-	if (!fence_locked)
-		spin_lock_irqsave(&engine->fence_lock, flags);
-
+	/*
+	 * Check for a new seqno. If it hasn't actually changed then early
+	 * exit without even grabbing the spinlock. Note that this is safe
+	 * because any corruption of last_irq_seqno merely results in doing
+	 * the full processing when there is potentially no work to be done.
+	 * It can never lead to not processing work that does need to happen.
+	 */
 	if (engine->irq_seqno_barrier)
 		engine->irq_seqno_barrier(engine);
 	seqno = engine->get_seqno(engine);
 	trace_i915_gem_request_notify(engine, seqno);
+	if (seqno == engine->last_irq_seqno)
+		return;
+
+	if (!fence_locked)
+		spin_lock_irqsave(&engine->fence_lock, flags);
+
+	engine->last_irq_seqno = seqno;
 
 	list_for_each_entry_safe(req, req_next, &engine->fence_signal_list, signal_link) {
 		if (!req->cancelled) {
@@ -3201,7 +3216,10 @@  static void i915_gem_reset_engine_cleanup(struct drm_i915_private *dev_priv,
 	 * Tidy up anything left over. This includes a call to
 	 * i915_gem_request_notify() which will make sure that any requests
 	 * that were on the signal pending list get also cleaned up.
+	 * NB: The seqno cache must be cleared otherwise the notify call will
+	 * simply return immediately.
 	 */
+	engine->last_irq_seqno = 0;
 	i915_gem_retire_requests_ring(engine);
 
 	/* Having flushed all requests from all queues, we know that all
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 113646c..1381e52 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -348,6 +348,7 @@  struct  intel_engine_cs {
 
 	spinlock_t fence_lock;
 	struct list_head fence_signal_list;
+	uint32_t last_irq_seqno;
 };
 
 static inline bool
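
Continuing the user-space analogue from above, the reset-path
requirement noted in the patch (the cached seqno must be cleared or
the forced notify would simply early-exit) could be sketched as
follows. reset_cleanup is an illustrative name, not a function in the
driver; it reuses struct fake_engine and notify() from the earlier
sketch.

/*
 * Sketch only: force a full re-scan after a reset in the analogue
 * above. Clearing the cache first guarantees notify() does not
 * compare equal against a stale value and return immediately.
 */
static void reset_cleanup(struct fake_engine *engine)
{
	engine->last_irq_seqno = 0;	/* invalidate the cached value */
	notify(engine);			/* re-reads the seqno and processes */
}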

Comments

On Thu, May 12, 2016 at 10:06:36PM +0100, John.C.Harrison@Intel.com wrote:
> From: John Harrison <John.C.Harrison@Intel.com>
> 
> The notify function can be called many times without the seqno
> changing. Some of these calls occur to prevent races caused by the
> requirement that interrupts not be enabled until requested. However,
> even when interrupts are enabled the IRQ handler can be called
> multiple times without the ring's seqno value changing. For example,
> if two interrupts are generated by batch buffers completing in quick
> succession, the first call to the handler processes both completions
> but the handler is still executed a second time. This patch reduces
> the overhead of these extra calls by caching the last processed seqno
> value and exiting early if it has not changed.

The idea is not to cache the last seqno (since that value is already
cached!) but to post new irq events.

Compare and contrast
https://patchwork.freedesktop.org/patch/85664/
-Chris