[v2] anv/allocator: Avoid race condition in anv_block_pool_map.

Submitted by Rafael Antognolli on Jan. 23, 2019, 11:26 p.m.

Message ID 20190123232614.22683-1-rafael.antognolli@intel.com
State New
Series "anv/allocator: Avoid race condition in anv_block_pool_map." ( rev: 2 ) in Mesa

Commit Message

Rafael Antognolli Jan. 23, 2019, 11:26 p.m.
Accessing bo->map and then pool->center_bo_offset without a lock is
racy. One way of avoiding such a race condition is to store bo->map +
center_bo_offset into pool->map at the time the block pool is grown,
which happens while the lock is held.

v2: Only set pool->map if not using softpin (Jason).

Cc: Jason Ekstrand <jason@jlekstrand.net>
Reported-by: Ian Romanick <idr@freedesktop.org>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109442
Fixes: fc3f58832015cbb177179e7f3420d3611479b4a9
---
 src/intel/vulkan/anv_allocator.c | 11 +++++++++--
 src/intel/vulkan/anv_private.h   | 13 +++++++++++++
 2 files changed, 22 insertions(+), 2 deletions(-)


diff --git a/src/intel/vulkan/anv_allocator.c b/src/intel/vulkan/anv_allocator.c
index 89f26789c85..67f9c2a948d 100644
--- a/src/intel/vulkan/anv_allocator.c
+++ b/src/intel/vulkan/anv_allocator.c
@@ -437,6 +437,7 @@  anv_block_pool_init(struct anv_block_pool *pool,
    pool->nbos = 0;
    pool->size = 0;
    pool->start_address = gen_canonical_address(start_address);
+   pool->map = NULL;
 
    /* This pointer will always point to the first BO in the list */
    pool->bo = &pool->bos[0];
@@ -576,6 +577,8 @@  anv_block_pool_expand_range(struct anv_block_pool *pool,
    /* Now that we successfull allocated everything, we can write the new
     * center_bo_offset back into pool. */
    pool->center_bo_offset = center_bo_offset;
+   if (!use_softpin)
+      pool->map = map + center_bo_offset;
 
    /* For block pool BOs we have to be a bit careful about where we place them
     * in the GTT.  There are two documented workarounds for state base address
@@ -670,8 +673,12 @@  anv_block_pool_get_bo(struct anv_block_pool *pool, int32_t *offset)
 void*
 anv_block_pool_map(struct anv_block_pool *pool, int32_t offset)
 {
-   struct anv_bo *bo = anv_block_pool_get_bo(pool, &offset);
-   return bo->map + pool->center_bo_offset + offset;
+   if (pool->bo_flags & EXEC_OBJECT_PINNED) {
+      struct anv_bo *bo = anv_block_pool_get_bo(pool, &offset);
+      return bo->map + offset;
+   } else {
+      return pool->map + offset;
+   }
 }
 
 /** Grows and re-centers the block pool.
diff --git a/src/intel/vulkan/anv_private.h b/src/intel/vulkan/anv_private.h
index 3889065c93c..110b2ccf023 100644
--- a/src/intel/vulkan/anv_private.h
+++ b/src/intel/vulkan/anv_private.h
@@ -663,6 +663,19 @@  struct anv_block_pool {
     */
    uint32_t center_bo_offset;
 
+   /* Current memory map of the block pool.  This pointer may or may not
+    * point to the actual beginning of the block pool memory.  If
+    * anv_block_pool_alloc_back has ever been called, then this pointer
+    * will point to the "center" position of the buffer and all offsets
+    * (negative or positive) given out by the block pool alloc functions
+    * will be valid relative to this pointer.
+    *
+    * In particular, map == bo.map + center_offset
+    *
+    * DO NOT access this pointer directly. Use anv_block_pool_map() instead,
+    * since it will handle the softpin case as well, where this points to NULL.
+    */
+   void *map;
    int fd;
 
    /**

Comments

On Wed, Jan 23, 2019 at 5:26 PM Rafael Antognolli <rafael.antognolli@intel.com> wrote:

> [snip]
>
> @@ -576,6 +577,8 @@ anv_block_pool_expand_range(struct anv_block_pool *pool,
>     /* Now that we successfull allocated everything, we can write the new
>      * center_bo_offset back into pool. */
>     pool->center_bo_offset = center_bo_offset;
> +   if (!use_softpin)
> +      pool->map = map + center_bo_offset;
>

We could also put this a bit higher up right after where we actually call
mmap.  That would reduce the number of "if (use_softpin)" blocks and
probably make things more readable.

Come to think of it, we could also set pool->center_bo_offset there and
just assert(center_bo_offset == 0) in the softpin case.  I like that.
Maybe that's a second patch?

In any case, this fixes today's bug.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>

--Jason


On Wed, Jan 23, 2019 at 06:08:50PM -0600, Jason Ekstrand wrote:
> On Wed, Jan 23, 2019 at 5:26 PM Rafael Antognolli <rafael.antognolli@intel.com>
> wrote:
> 
>     [snip]
> 
>     @@ -576,6 +577,8 @@ anv_block_pool_expand_range(struct anv_block_pool *pool,
>         /* Now that we successfull allocated everything, we can write the new
>          * center_bo_offset back into pool. */
>         pool->center_bo_offset = center_bo_offset;
>     +   if (!use_softpin)
>     +      pool->map = map + center_bo_offset;
> 
> 
> We could also put this a bit higher up right after where we actually call
> mmap.  That would reduce the number of "if (use_softpin)" blocks and probably
> make things more readable.
> 
> Come to think of it, we could also set pool->center_bo_offset there and just
> assert(center_bo_offset == 0) in the softpin case.  I like that.  Maybe that's
> a second patch?

I wouldn't mind sending a v3. In fact, I have it ready and will send it
right after this email. But I can also simply push this one and send a
new patch tomorrow, if it's too late to get a review on the v3.

> In any case, this fixes today's bug
> 
> Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>

Thanks!
Rafael

> --Jason

> _______________________________________________
> mesa-dev mailing list
> mesa-dev@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/mesa-dev