drm/i915/gvt: stop scheduling workload when vgpu is inactive

Submitted by Weinan Li on Feb. 27, 2019, 7:36 a.m.

Details

Message ID: 1551253018-16671-1-git-send-email-weinan.z.li@intel.com
State: New
Series "drm/i915/gvt: stop scheduling workload when vgpu is inactive" ( rev: 1 ) in Intel GVT devel

Commit Message

Weinan Li Feb. 27, 2019, 7:36 a.m.
There is a corner case in which workload_thread may pick and dispatch a
workload of a vgpu after that vgpu has already been deactivated. The
scenario is:

1. deactive_vgpu takes vgpu_lock, finds that a pending workload has been
submitted, then releases vgpu_lock and waits for the vgpu to become idle.
2. Before deactive_vgpu re-acquires vgpu_lock, workload_thread picks a
new valid workload, then blocks on vgpu_lock.
3. deactive_vgpu acquires vgpu_lock again, finishes the remaining
deactivation steps, then releases vgpu_lock.
4. workload_thread acquires vgpu_lock and tries to dispatch the fetched
workload. Dispatching a workload of a deactivated vgpu is not expected.

The fix is to check the vgpu's active flag in pick_next_workload() and
stop scheduling when the vgpu is inactive.
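
To make the interleaving concrete, here is a minimal userspace sketch of
the race using pthreads. It is illustrative only: the struct layout and
the helper names (wait_for_idle, deactivate_vgpu,
workload_thread_iteration, complete_workload) are simplified stand-ins,
not the actual GVT-g code; only vgpu_lock and the active flag correspond
to the real ones.

#include <stdbool.h>
#include <pthread.h>

/* Simplified vgpu state; vgpu_lock protects all fields. */
struct vgpu {
	bool active;
	int queued;			/* workloads waiting to be picked */
	int running;			/* workloads dispatched to hardware */
	pthread_mutex_t vgpu_lock;
};

/* Called from elsewhere when the hardware finishes a workload. */
static void complete_workload(struct vgpu *v)
{
	pthread_mutex_lock(&v->vgpu_lock);
	v->running--;
	pthread_mutex_unlock(&v->vgpu_lock);
}

/* Busy-waits for simplicity; the real code sleeps on a wait queue. */
static void wait_for_idle(struct vgpu *v)
{
	for (;;) {
		pthread_mutex_lock(&v->vgpu_lock);
		bool idle = (v->running == 0);
		pthread_mutex_unlock(&v->vgpu_lock);
		if (idle)
			return;
	}
}

/* Deactivation path: steps 1 and 3 of the scenario. */
static void deactivate_vgpu(struct vgpu *v)
{
	pthread_mutex_lock(&v->vgpu_lock);
	v->active = false;
	if (v->running) {
		pthread_mutex_unlock(&v->vgpu_lock);
		wait_for_idle(v);	/* step 2 can interleave here */
		pthread_mutex_lock(&v->vgpu_lock);
	}
	/* step 3: finish the remaining deactivation work */
	pthread_mutex_unlock(&v->vgpu_lock);
}

/*
 * Scheduler path: step 4. Without the active check, a workload queued
 * during step 2 would still be dispatched after deactivation.
 */
static void workload_thread_iteration(struct vgpu *v)
{
	pthread_mutex_lock(&v->vgpu_lock);
	if (v->active && v->queued) {	/* the added check */
		v->queued--;
		v->running++;		/* dispatch to hardware */
	}
	pthread_mutex_unlock(&v->vgpu_lock);
}

In the real driver the check goes into pick_next_workload(), as in the
diff below, so the scheduler stops handing out workloads for a vgpu once
its active flag is cleared.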

Signed-off-by: Weinan Li <weinan.z.li@intel.com>
---
 drivers/gpu/drm/i915/gvt/scheduler.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
index 1bb8f93..2bcb701 100644
--- a/drivers/gpu/drm/i915/gvt/scheduler.c
+++ b/drivers/gpu/drm/i915/gvt/scheduler.c
@@ -739,7 +739,8 @@ static struct intel_vgpu_workload *pick_next_workload(
 		goto out;
 	}
 
-	if (list_empty(workload_q_head(scheduler->current_vgpu, ring_id)))
+	if (!scheduler->current_vgpu->active ||
+	    list_empty(workload_q_head(scheduler->current_vgpu, ring_id)))
 		goto out;
 
 	/*

Comments

On 2019.02.27 15:36:58 +0800, Weinan Li wrote:
> There is a corner case in which workload_thread may pick and dispatch a
> workload of a vgpu after that vgpu has already been deactivated. The
> scenario is:
> 
> 1. deactive_vgpu takes vgpu_lock, finds that a pending workload has been
> submitted, then releases vgpu_lock and waits for the vgpu to become idle.
> 2. Before deactive_vgpu re-acquires vgpu_lock, workload_thread picks a
> new valid workload, then blocks on vgpu_lock.
> 3. deactive_vgpu acquires vgpu_lock again, finishes the remaining
> deactivation steps, then releases vgpu_lock.
> 4. workload_thread acquires vgpu_lock and tries to dispatch the fetched
> workload. Dispatching a workload of a deactivated vgpu is not expected.
> 
> The fix is to check the vgpu's active flag in pick_next_workload() and
> stop scheduling when the vgpu is inactive.
> 
> Signed-off-by: Weinan Li <weinan.z.li@intel.com>
> ---
>  drivers/gpu/drm/i915/gvt/scheduler.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
> index 1bb8f93..2bcb701 100644
> --- a/drivers/gpu/drm/i915/gvt/scheduler.c
> +++ b/drivers/gpu/drm/i915/gvt/scheduler.c
> @@ -739,7 +739,8 @@ static struct intel_vgpu_workload *pick_next_workload(
>  		goto out;
>  	}
>  
> -	if (list_empty(workload_q_head(scheduler->current_vgpu, ring_id)))
> +	if (!scheduler->current_vgpu->active ||
> +	    list_empty(workload_q_head(scheduler->current_vgpu, ring_id)))
>  		goto out;
>  
>  	/*

Looks sane to me.

Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>