Always mark GEM objects as dirty when written by the CPU

Submitted by Dave Gordon on Dec. 1, 2015, 12:42 p.m.

Details

Message ID 1448973722-34522-1-git-send-email-david.s.gordon@intel.com
State New
Series "Always mark GEM objects as dirty when written by the CPU" ( rev: 1 ) in Intel GFX


Commit Message

Dave Gordon Dec. 1, 2015, 12:42 p.m.
In various places, one or more pages of a GEM object are mapped into CPU
address space and updated. In each such case, the object should be
marked dirty, to ensure that the modifications are not discarded if the
object is evicted under memory pressure.

This is similar to commit
	commit 51bc140431e233284660b1d22c47dec9ecdb521e
	Author: Chris Wilson <chris@chris-wilson.co.uk>
	Date:   Mon Aug 31 15:10:39 2015 +0100
	drm/i915: Always mark the object as dirty when used by the GPU

in which Chris ensured that updates by the GPU were not lost due to
eviction, but this patch applies instead to the multiple places where
object content is updated by the host CPU.

It also incorporates and supersedes Alex Dai's earlier patch
[PATCH v1] drm/i915/guc: Fix a fw content lost issue after it is evicted

Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Alex Dai <yu.dai@intel.com>
---
 drivers/gpu/drm/i915/i915_cmd_parser.c       | 1 +
 drivers/gpu/drm/i915/i915_gem.c              | 1 +
 drivers/gpu/drm/i915/i915_gem_dmabuf.c       | 2 ++
 drivers/gpu/drm/i915/i915_gem_execbuffer.c   | 2 ++
 drivers/gpu/drm/i915/i915_gem_render_state.c | 1 +
 drivers/gpu/drm/i915/i915_guc_submission.c   | 1 +
 drivers/gpu/drm/i915/intel_lrc.c             | 6 +++++-
 7 files changed, 13 insertions(+), 1 deletion(-)


diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
index 814d894..292bd5d 100644
--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
+++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
@@ -945,6 +945,7 @@  static u32 *copy_batch(struct drm_i915_gem_object *dest_obj,
 		drm_clflush_virt_range(src, batch_len);
 
 	memcpy(dst, src, batch_len);
+	dest_obj->dirty = 1;
 
 unmap_src:
 	vunmap(src_base);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 33adc8f..76bacba 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -5201,6 +5201,7 @@  i915_gem_object_create_from_data(struct drm_device *dev,
 	i915_gem_object_pin_pages(obj);
 	sg = obj->pages;
 	bytes = sg_copy_from_buffer(sg->sgl, sg->nents, (void *)data, size);
+	obj->dirty = 1;
 	i915_gem_object_unpin_pages(obj);
 
 	if (WARN_ON(bytes != size)) {
diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
index e9c2bfd..49a74c6 100644
--- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
@@ -208,6 +208,8 @@  static int i915_gem_begin_cpu_access(struct dma_buf *dma_buf, size_t start, size
 		return ret;
 
 	ret = i915_gem_object_set_to_cpu_domain(obj, write);
+	if (write)
+		obj->dirty = 1;
 	mutex_unlock(&dev->struct_mutex);
 	return ret;
 }
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index a4c243c..bc28a10 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -281,6 +281,7 @@  relocate_entry_cpu(struct drm_i915_gem_object *obj,
 	}
 
 	kunmap_atomic(vaddr);
+	obj->dirty = 1;
 
 	return 0;
 }
@@ -372,6 +373,7 @@  relocate_entry_clflush(struct drm_i915_gem_object *obj,
 	}
 
 	kunmap_atomic(vaddr);
+	obj->dirty = 1;
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c
index 5026a62..dd1976c 100644
--- a/drivers/gpu/drm/i915/i915_gem_render_state.c
+++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
@@ -144,6 +144,7 @@  static int render_state_setup(struct render_state *so)
 	so->aux_batch_size = ALIGN(so->aux_batch_size, 8);
 
 	kunmap(page);
+	so->obj->dirty = 1;
 
 	ret = i915_gem_object_set_to_gtt_domain(so->obj, false);
 	if (ret)
diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index a057cbd..b4a99a2 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -583,6 +583,7 @@  static void lr_context_update(struct drm_i915_gem_request *rq)
 	reg_state[CTX_RING_BUFFER_START+1] = i915_gem_obj_ggtt_offset(rb_obj);
 
 	kunmap_atomic(reg_state);
+	ctx_obj->dirty = 1;
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 4ebafab..bc77794 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -391,6 +391,7 @@  static int execlists_update_context(struct drm_i915_gem_request *rq)
 	}
 
 	kunmap_atomic(reg_state);
+	ctx_obj->dirty = 1;
 
 	return 0;
 }
@@ -1030,7 +1031,7 @@  static int intel_lr_context_do_pin(struct intel_engine_cs *ring,
 	if (ret)
 		goto unpin_ctx_obj;
 
-	ctx_obj->dirty = true;
+	ctx_obj->dirty = 1;
 
 	/* Invalidate GuC TLB. */
 	if (i915.enable_guc_submission)
@@ -1461,6 +1462,8 @@  static int intel_init_workaround_bb(struct intel_engine_cs *ring)
 
 out:
 	kunmap_atomic(batch);
+	wa_ctx->obj->dirty = 1;
+
 	if (ret)
 		lrc_destroy_wa_ctx_obj(ring);
 
@@ -2536,6 +2539,7 @@  void intel_lr_context_reset(struct drm_device *dev,
 		reg_state[CTX_RING_TAIL+1] = 0;
 
 		kunmap_atomic(reg_state);
+		ctx_obj->dirty = 1;
 
 		ringbuf->head = 0;
 		ringbuf->tail = 0;

Comments

On Tue, Dec 01, 2015 at 12:42:02PM +0000, Dave Gordon wrote:
> In various places, one or more pages of a GEM object are mapped into CPU
> address space and updated. In each such case, the object should be
> marked dirty, to ensure that the modifications are not discarded if the
> object is evicted under memory pressure.
> 
> This is similar to commit
> 	commit 51bc140431e233284660b1d22c47dec9ecdb521e
> 	Author: Chris Wilson <chris@chris-wilson.co.uk>
> 	Date:   Mon Aug 31 15:10:39 2015 +0100
> 	drm/i915: Always mark the object as dirty when used by the GPU
> 
> in which Chris ensured that updates by the GPU were not lost due to
> eviction, but this patch applies instead to the multiple places where
> object content is updated by the host CPU.

Apart from the fact that that commit was there to mask userspace bugs,
here we are in control of when the pages are marked, and have chosen a
per-page interface for CPU writes as opposed to the per-object one.
-Chris
On 01/12/15 13:04, Chris Wilson wrote:
> On Tue, Dec 01, 2015 at 12:42:02PM +0000, Dave Gordon wrote:
>> In various places, one or more pages of a GEM object are mapped into CPU
>> address space and updated. In each such case, the object should be
>> marked dirty, to ensure that the modifications are not discarded if the
>> object is evicted under memory pressure.
>>
>> This is similar to commit
>> 	commit 51bc140431e233284660b1d22c47dec9ecdb521e
>> 	Author: Chris Wilson <chris@chris-wilson.co.uk>
>> 	Date:   Mon Aug 31 15:10:39 2015 +0100
>> 	drm/i915: Always mark the object as dirty when used by the GPU
>>
>> in which Chris ensured that updates by the GPU were not lost due to
>> eviction, but this patch applies instead to the multiple places where
>> object content is updated by the host CPU.
>
> Apart from that commit was to mask userspace bugs, here we are under
> control of when the pages are marked and have chosen a different
> per-page interface for CPU writes as opposed to per-object.
> -Chris
>

The pattern
	get_pages();
	kmap(get_page())
	write
	kunmap()
occurs often enough that it might be worth providing a common function 
to do that and mark only the specific page dirty (other cases touch the 
whole object, so for those we can just set the obj->dirty flag and let 
put_pages() take care of propagating that to all the individual pages).

But can we be sure that all the functions touched by this patch will 
operate only on regular (default) GEM objects (i.e. not phys, stolen, 
etc) 'cos some of those don't support per-page tracking. What about 
objects with no backing store -- can/should we mark those as dirty 
(which would prevent eviction)?

.Dave.
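The two marking strategies Dave contrasts above (mark one page at the point of use, versus set obj->dirty and let put_pages() fan it out) can be sketched as a small userspace model. This is purely illustrative: struct gem_obj, gem_write_page() and gem_put_pages() are invented names, the memcpy() stands in for the kmap()/write/kunmap() sequence, and the bool array stands in for per-struct-page dirty state maintained by set_page_dirty() in the real driver.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define NPAGES  4
#define PAGE_SZ 4096

/* Hypothetical stand-in for struct drm_i915_gem_object. */
struct gem_obj {
	bool dirty;                  /* whole-object dirty flag */
	bool page_dirty[NPAGES];     /* per-page dirty tracking */
	char pages[NPAGES][PAGE_SZ]; /* stand-in for the backing pages */
};

/*
 * The common helper proposed above: map one page, write it, unmap,
 * and mark only that page dirty.
 */
static void gem_write_page(struct gem_obj *obj, int pageno,
			   const void *src, size_t len)
{
	/* kmap(get_page(pageno)); write; kunmap() */
	memcpy(obj->pages[pageno], src, len);
	obj->page_dirty[pageno] = true;	/* per-page marking */
}

/*
 * put_pages() for the coarse scheme: propagate the object-level
 * flag to every individual page before release.
 */
static void gem_put_pages(struct gem_obj *obj)
{
	if (obj->dirty)
		for (int i = 0; i < NPAGES; i++)
			obj->page_dirty[i] = true;
	obj->dirty = false;
}
```

Note that only the second path touches every page; the helper keeps writeback proportional to what was actually written.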
On Tue, Dec 01, 2015 at 01:21:07PM +0000, Dave Gordon wrote:
> On 01/12/15 13:04, Chris Wilson wrote:
> >On Tue, Dec 01, 2015 at 12:42:02PM +0000, Dave Gordon wrote:
> >>In various places, one or more pages of a GEM object are mapped into CPU
> >>address space and updated. In each such case, the object should be
> >>marked dirty, to ensure that the modifications are not discarded if the
> >>object is evicted under memory pressure.
> >>
> >>This is similar to commit
> >>	commit 51bc140431e233284660b1d22c47dec9ecdb521e
> >>	Author: Chris Wilson <chris@chris-wilson.co.uk>
> >>	Date:   Mon Aug 31 15:10:39 2015 +0100
> >>	drm/i915: Always mark the object as dirty when used by the GPU
> >>
> >>in which Chris ensured that updates by the GPU were not lost due to
> >>eviction, but this patch applies instead to the multiple places where
> >>object content is updated by the host CPU.
> >
> >Apart from that commit was to mask userspace bugs, here we are under
> >control of when the pages are marked and have chosen a different
> >per-page interface for CPU writes as opposed to per-object.
> >-Chris
> >
> 
> The pattern
> 	get_pages();
> 	kmap(get_page())
> 	write
> 	kunmap()
> occurs often enough that it might be worth providing a common function to do
> that and mark only the specific page dirty (other cases touch the whole
> object, so for those we can just set the obj->dirty flag and let put_pages()
> take care of propagating that to all the individual pages).
> 
> But can we be sure that all the functions touched by this patch will operate
> only on regular (default) GEM objects (i.e. not phys, stolen, etc) 'cos some
> of those don't support per-page tracking. What about objects with no backing
> store -- can/should we mark those as dirty (which would prevent eviction)?

I thought our special objects do clear obj->dirty on put_pages? Can you
please elaborate on your concern?

While we discuss all this: A patch at the end to document dirty (maybe
even as a first stab at kerneldoc for i915_drm_gem_buffer_object) would be
awesome.
-Daniel
On 04/12/15 09:57, Daniel Vetter wrote:
> On Tue, Dec 01, 2015 at 01:21:07PM +0000, Dave Gordon wrote:
>> On 01/12/15 13:04, Chris Wilson wrote:
>>> On Tue, Dec 01, 2015 at 12:42:02PM +0000, Dave Gordon wrote:
>>>> In various places, one or more pages of a GEM object are mapped into CPU
>>>> address space and updated. In each such case, the object should be
>>>> marked dirty, to ensure that the modifications are not discarded if the
>>>> object is evicted under memory pressure.
>>>>
>>>> This is similar to commit
>>>> 	commit 51bc140431e233284660b1d22c47dec9ecdb521e
>>>> 	Author: Chris Wilson <chris@chris-wilson.co.uk>
>>>> 	Date:   Mon Aug 31 15:10:39 2015 +0100
>>>> 	drm/i915: Always mark the object as dirty when used by the GPU
>>>>
>>>> in which Chris ensured that updates by the GPU were not lost due to
>>>> eviction, but this patch applies instead to the multiple places where
>>>> object content is updated by the host CPU.
>>>
>>> Apart from that commit was to mask userspace bugs, here we are under
>>> control of when the pages are marked and have chosen a different
>>> per-page interface for CPU writes as opposed to per-object.
>>> -Chris
>>
>> The pattern
>> 	get_pages();
>> 	kmap(get_page())
>> 	write
>> 	kunmap()
>> occurs often enough that it might be worth providing a common function to do
>> that and mark only the specific page dirty (other cases touch the whole
>> object, so for those we can just set the obj->dirty flag and let put_pages()
>> take care of propagating that to all the individual pages).
>>
>> But can we be sure that all the functions touched by this patch will operate
>> only on regular (default) GEM objects (i.e. not phys, stolen, etc) 'cos some
>> of those don't support per-page tracking. What about objects with no backing
>> store -- can/should we mark those as dirty (which would prevent eviction)?
>
> I thought our special objects do clear obj->dirty on put_pages? Can you
> please elaborate on your concern?
>
> While we discuss all this: A patch at the end to document dirty (maybe
> even as a first stab at kerneldoc for i915_drm_gem_buffer_object) would be
> awesome.
> -Daniel

In general, obj->dirty means that some or all of the object's pages may 
have been modified since the last time the object was read from backing 
store, and that the modified data should be written back rather than 
discarded.

Code that works only on default (gtt) GEM objects may be able to 
optimise writebacks by marking individual pages dirty, rather than the 
object as a whole. But not every GEM object has backing store, and even 
among those that do, some do not support per-page dirty tracking.

These are the GEM objects we may want to consider:

1. Default (gtt) object
    * Discontiguous, lives in page cache while pinned during use
    * Backed by shmfs (swap)
    * put_pages() transfers dirty status from object to each page
      before release
    * shmfs ensures that dirty unpinned pages are written out
      before deallocation
    * Could optimise by marking individual pages at point of use,
      rather than marking whole object and then pushing to all pages
      during put_pages()

2. Phys GEM object
    * Lives in physically-contiguous system memory, pinned during use
    * Backed by shmfs
    * if obj->dirty, put_pages() *copies* all pages back to shmfs via
      page cache RMW
    * No per-page tracking, cannot optimise

3. Stolen GEM object
    * Lives in (physically-contiguous) stolen memory, always pinned
    * No backing store!
    * obj->dirty is irrelevant (ignored)
    * put_pages() only called at end-of-life
    * No per-page tracking (not meaningful anyway)

4. Userptr GEM object
    * Discontiguous, lives in page cache while pinned during use
    * Backed by user process memory (which may then map to some
      arbitrary file mapping?)
    * put_pages() transfers dirty status from object to each page
      before release
    * dirty pages are still resident in user space, can be swapped
      out when not pinned
    * Could optimise by marking individual pages at point of use,
      rather than marking whole object and then pushing to all pages
      during put_pages()

Are there any more?

Given this diversity, it may be worth adding a dirty_page() vfunc, so 
that for those situations where a single page is dirtied AND the object 
type supports per-page tracking, we can take advantage of this to reduce 
copying. For objects that don't support per-page tracking, the 
implementation would just set obj->dirty.

For example:
     void (*dirty_page)(obj, pageno);
possibly with the additional semantic that pageno == -1 means 'dirty the 
whole object'.

A convenient further facility would be:
     struct page *i915_gem_object_get_dirty_page(obj, pageno);
which is just like i915_gem_object_get_page() but with the additional 
effect of marking the returned page dirty (by calling the above vfunc).
[Aside: can we call set_page_dirty() on a non-shmfs-backed page?].

This means that in all the places where I added 'obj->dirty = 1' after a 
kunmap() call, we would instead just change the earlier get_page() to 
get_dirty_page() instead, which provides better layering.

Together these changes mean that obj->dirty would then be a purely 
private member for use by implementations of get_pages/put_pages().

Opinions?

.Dave.
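The dirty_page() vfunc and get_dirty_page() helper proposed above might look roughly like the following userspace simulation. Every name here is an invented stand-in (the real code would hang the vfunc off the object's ops and use struct page and set_page_dirty()); the point is only to show the dispatch, the pageno == -1 "dirty the whole object" convention, and the coarse fallback for object types without per-page tracking.

```c
#include <assert.h>
#include <stdbool.h>

#define NPAGES 4

struct gem_obj;

/* Hypothetical per-type ops table. */
struct gem_obj_ops {
	/* pageno == -1 means "dirty the whole object" */
	void (*dirty_page)(struct gem_obj *obj, int pageno);
};

struct gem_obj {
	const struct gem_obj_ops *ops;
	bool dirty;
	bool page_dirty[NPAGES];
	char *pages[NPAGES];	/* stand-ins for struct page * */
};

/* Default (shmfs-backed) objects: per-page tracking is available. */
static void default_dirty_page(struct gem_obj *obj, int pageno)
{
	if (pageno < 0) {
		obj->dirty = true;
		return;
	}
	obj->page_dirty[pageno] = true;	/* set_page_dirty() analogue */
}

/* Phys/stolen-style objects: no per-page tracking, dirty the object. */
static void coarse_dirty_page(struct gem_obj *obj, int pageno)
{
	(void)pageno;
	obj->dirty = true;
}

/*
 * Like get_page(), but marks the returned page dirty first, so
 * callers never touch obj->dirty directly.
 */
static char *gem_object_get_dirty_page(struct gem_obj *obj, int pageno)
{
	obj->ops->dirty_page(obj, pageno);
	return obj->pages[pageno];
}
```

With this shape, obj->dirty becomes a private detail of the get_pages()/put_pages() implementations, as suggested at the end of the message above.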
On Fri, Dec 04, 2015 at 05:28:29PM +0000, Dave Gordon wrote:
> On 04/12/15 09:57, Daniel Vetter wrote:
> >On Tue, Dec 01, 2015 at 01:21:07PM +0000, Dave Gordon wrote:
> >>On 01/12/15 13:04, Chris Wilson wrote:
> >>>On Tue, Dec 01, 2015 at 12:42:02PM +0000, Dave Gordon wrote:
> >>>>In various places, one or more pages of a GEM object are mapped into CPU
> >>>>address space and updated. In each such case, the object should be
> >>>>marked dirty, to ensure that the modifications are not discarded if the
> >>>>object is evicted under memory pressure.
> >>>>
> >>>>This is similar to commit
> >>>>	commit 51bc140431e233284660b1d22c47dec9ecdb521e
> >>>>	Author: Chris Wilson <chris@chris-wilson.co.uk>
> >>>>	Date:   Mon Aug 31 15:10:39 2015 +0100
> >>>>	drm/i915: Always mark the object as dirty when used by the GPU
> >>>>
> >>>>in which Chris ensured that updates by the GPU were not lost due to
> >>>>eviction, but this patch applies instead to the multiple places where
> >>>>object content is updated by the host CPU.
> >>>
> >>>Apart from that commit was to mask userspace bugs, here we are under
> >>>control of when the pages are marked and have chosen a different
> >>>per-page interface for CPU writes as opposed to per-object.
> >>>-Chris
> >>
> >>The pattern
> >>	get_pages();
> >>	kmap(get_page())
> >>	write
> >>	kunmap()
> >>occurs often enough that it might be worth providing a common function to do
> >>that and mark only the specific page dirty (other cases touch the whole
> >>object, so for those we can just set the obj->dirty flag and let put_pages()
> >>take care of propagating that to all the individual pages).
> >>
> >>But can we be sure that all the functions touched by this patch will operate
> >>only on regular (default) GEM objects (i.e. not phys, stolen, etc) 'cos some
> >>of those don't support per-page tracking. What about objects with no backing
> >>store -- can/should we mark those as dirty (which would prevent eviction)?
> >
> >I thought our special objects do clear obj->dirty on put_pages? Can you
> >please elaborate on your concern?
> >
> >While we discuss all this: A patch at the end to document dirty (maybe
> >even as a first stab at kerneldoc for i915_drm_gem_buffer_object) would be
> >awesome.
> >-Daniel
> 
> In general, obj->dirty means that (some or) all the pages of the object
> (may) have been modified since last time the object was read from backing
> store, and that the modified data should be written back rather than
> discarded.
> 
> Code that works only on default (gtt) GEM objects may be able to optimise
> writebacks by marking individual pages dirty, rather than the object as a
> whole. But not every GEM object has backing store, and even among those that
> do, some do not support per-page dirty tracking.
> 
> These are the GEM objects we may want to consider:
> 
> 1. Default (gtt) object
>    * Discontiguous, lives in page cache while pinned during use
>    * Backed by shmfs (swap)
>    * put_pages() transfers dirty status from object to each page
>      before release
>    * shmfs ensures that dirty unpinned pages are written out
>      before deallocation
>    * Could optimise by marking individual pages at point of use,
>      rather than marking whole object and then pushing to all pages
>      during put_pages()
> 
> 2. Phys GEM object
>    * Lives in physically-contiguous system memory, pinned during use
>    * Backed by shmfs
>    * if obj->dirty, put_pages() *copies* all pages back to shmfs via
>      page cache RMW
>    * No per-page tracking, cannot optimise
> 
> 3. Stolen GEM object
>    * Lives in (physically-contiguous) stolen memory, always pinned
>    * No backing store!
>    * obj->dirty is irrelevant (ignored)
>    * put_pages() only called at end-of-life
>    * No per-page tracking (not meaningful anyway)
> 
> 4. Userptr GEM object
>    * Discontiguous, lives in page cache while pinned during use
>    * Backed by user process memory (which may then map to some
>      arbitrary file mapping?)
>    * put_pages() transfers dirty status from object to each page
>      before release
>    * dirty pages are still resident in user space, can be swapped
>      out when not pinned
>    * Could optimise by marking individual pages at point of use,
>      rather than marking whole object and then pushing to all pages
>      during put_pages()
> 
> Are there any more?
> 
> Given this diversity, it may be worth adding a dirty_page() vfunc, so that
> for those situations where a single page is dirtied AND the object type
> supports per-page tracking, we can take advantage of this to reduce copying.
> For objects that don't support per-page tracking, the implementation would
> just set obj->dirty.
> 
> For example:
>     void (*dirty_page)(obj, pageno);
> possibly with the additional semantic that pageno == -1 means 'dirty the
> whole object'.
> 
> A convenient further facility would be:
>     struct page *i915_gem_object_get_dirty_page(obj, pageno);
> which is just like i915_gem_object_get_page() but with the additional effect
> of marking the returned page dirty (by calling the above vfunc).
> [Aside: can we call set_page_dirty() on a non-shmfs-backed page?].
> 
> This means that in all the places where I added 'obj->dirty = 1' after a
> kunmap() call, we would instead just change the earlier get_page() to
> get_dirty_page() instead, which provides better layering.
> 
> Together these changes mean that obj->dirty would then be a purely private
> member for use by implementations of get_pages/put_pages().
> 
> Opinions?

Hm, I thought we'd been careful to check that an object is somehow
backed by struct pages, and to use the page-wise access only in that
case. But looking at the execbuf relocate code we've probably already
screwed this up, or at least will when we expose stolen to userspace.
Userptr should still work (since ultimately it's struct page backed), and
phys gem objects don't matter (if you put relocs into your cursor on
gen2-4.0 you get all the pieces). I think step one would be more nasty
test coverage, at least for the execbuf path.

The other page-wise access paths all seem internal, so I'm much less
worried about those.
-Daniel
On 07/12/15 08:29, Daniel Vetter wrote:
> On Fri, Dec 04, 2015 at 05:28:29PM +0000, Dave Gordon wrote:
>> On 04/12/15 09:57, Daniel Vetter wrote:
>>> On Tue, Dec 01, 2015 at 01:21:07PM +0000, Dave Gordon wrote:
>>>> On 01/12/15 13:04, Chris Wilson wrote:
>>>>> On Tue, Dec 01, 2015 at 12:42:02PM +0000, Dave Gordon wrote:
>>>>>> In various places, one or more pages of a GEM object are mapped into CPU
>>>>>> address space and updated. In each such case, the object should be
>>>>>> marked dirty, to ensure that the modifications are not discarded if the
>>>>>> object is evicted under memory pressure.
>>>>>>
>>>>>> This is similar to commit
>>>>>> 	commit 51bc140431e233284660b1d22c47dec9ecdb521e
>>>>>> 	Author: Chris Wilson <chris@chris-wilson.co.uk>
>>>>>> 	Date:   Mon Aug 31 15:10:39 2015 +0100
>>>>>> 	drm/i915: Always mark the object as dirty when used by the GPU
>>>>>>
>>>>>> in which Chris ensured that updates by the GPU were not lost due to
>>>>>> eviction, but this patch applies instead to the multiple places where
>>>>>> object content is updated by the host CPU.
>>>>>
>>>>> Apart from that commit was to mask userspace bugs, here we are under
>>>>> control of when the pages are marked and have chosen a different
>>>>> per-page interface for CPU writes as opposed to per-object.
>>>>> -Chris
>>>>
>>>> The pattern
>>>> 	get_pages();
>>>> 	kmap(get_page())
>>>> 	write
>>>> 	kunmap()
>>>> occurs often enough that it might be worth providing a common function to do
>>>> that and mark only the specific page dirty (other cases touch the whole
>>>> object, so for those we can just set the obj->dirty flag and let put_pages()
>>>> take care of propagating that to all the individual pages).
>>>>
>>>> But can we be sure that all the functions touched by this patch will operate
>>>> only on regular (default) GEM objects (i.e. not phys, stolen, etc) 'cos some
>>>> of those don't support per-page tracking. What about objects with no backing
>>>> store -- can/should we mark those as dirty (which would prevent eviction)?
>>>
>>> I thought our special objects do clear obj->dirty on put_pages? Can you
>>> please elaborate on your concern?
>>>
>>> While we discuss all this: A patch at the end to document dirty (maybe
>>> even as a first stab at kerneldoc for i915_drm_gem_buffer_object) would be
>>> awesome.
>>> -Daniel
>>
>> In general, obj->dirty means that (some or) all the pages of the object
>> (may) have been modified since last time the object was read from backing
>> store, and that the modified data should be written back rather than
>> discarded.
>>
>> Code that works only on default (gtt) GEM objects may be able to optimise
>> writebacks by marking individual pages dirty, rather than the object as a
>> whole. But not every GEM object has backing store, and even among those that
>> do, some do not support per-page dirty tracking.
>>
>> These are the GEM objects we may want to consider:
>>
>> 1. Default (gtt) object
>>     * Discontiguous, lives in page cache while pinned during use
>>     * Backed by shmfs (swap)
>>     * put_pages() transfers dirty status from object to each page
>>       before release
>>     * shmfs ensures that dirty unpinned pages are written out
>>       before deallocation
>>     * Could optimise by marking individual pages at point of use,
>>       rather than marking whole object and then pushing to all pages
>>       during put_pages()
>>
>> 2. Phys GEM object
>>     * Lives in physically-contiguous system memory, pinned during use
>>     * Backed by shmfs
>>     * if obj->dirty, put_pages() *copies* all pages back to shmfs via
>>       page cache RMW
>>     * No per-page tracking, cannot optimise
>>
>> 3. Stolen GEM object
>>     * Lives in (physically-contiguous) stolen memory, always pinned
>>     * No backing store!
>>     * obj->dirty is irrelevant (ignored)
>>     * put_pages() only called at end-of-life
>>     * No per-page tracking (not meaningful anyway)
>>
>> 4. Userptr GEM object
>>     * Discontiguous, lives in page cache while pinned during use
>>     * Backed by user process memory (which may then map to some
>>       arbitrary file mapping?)
>>     * put_pages() transfers dirty status from object to each page
>>       before release
>>     * dirty pages are still resident in user space, can be swapped
>>       out when not pinned
>>     * Could optimise by marking individual pages at point of use,
>>       rather than marking whole object and then pushing to all pages
>>       during put_pages()
>>
>> Are there any more?
>>
>> Given this diversity, it may be worth adding a dirty_page() vfunc, so that
>> for those situations where a single page is dirtied AND the object type
>> supports per-page tracking, we can take advantage of this to reduce copying.
>> For objects that don't support per-page tracking, the implementation would
>> just set obj->dirty.
>>
>> For example:
>>      void (*dirty_page)(obj, pageno);
>> possibly with the additional semantic that pageno == -1 means 'dirty the
>> whole object'.
>>
>> A convenient further facility would be:
>>      struct page *i915_gem_object_get_dirty_page(obj, pageno);
>> which is just like i915_gem_object_get_page() but with the additional effect
>> of marking the returned page dirty (by calling the above vfunc).
>> [Aside: can we call set_page_dirty() on a non-shmfs-backed page?].
>>
>> This means that in all the places where I added 'obj->dirty = 1' after a
>> kunmap() call, we would instead just change the earlier get_page() to
>> get_dirty_page() instead, which provides better layering.
>>
>> Together these changes mean that obj->dirty would then be a purely private
>> member for use by implementations of get_pages/put_pages().
>>
>> Opinions?
>
> Hm, I thought we've been careful with checking that an object is somehow
> backed by struct pages, and only use the page-wise access if that's the
> case. But looking at the execbuf relocate code we've probably already
> screwed this up, or at least will when we expose stolen to userspace.
> Userptr should still work (since ultimately it's struct page backed), and
> phys gem object doesn't matter (if you but relocs into your cursor on
> gen2-4.0 you get all the pieces). I think step one would be more nasty
> test coverage, at least for the execbuf path.
>
> The other page-wise access path seem all internal, so I'm much less
> worried about those.
> -Daniel

So does this mean that i915_pages_create_for_stolen() isn't really doing 
what it says? After that function has been called, obj->pages is filled 
in - but is it then valid to call i915_gem_object_get_page() ?
That returns a pointer to the (preexisting) entry in the system page 
tables for the specified page, but isn't the anomalous thing about 
stolen memory the fact that the kernel doesn't know about it and doesn't 
include it in its page tables at all?

For kmap purposes, we don't really need the 'struct page' as we could 
use kmap_atomic_pfn() instead. So maybe to make stolen objects work in 
general without everything having to know they're different, we would 
need to move the kmap operation into the vfunc as well? That would mean 
the vfunc would be something like obj->kmap_page(obj, pageno, dirty) 
returning the vaddr of the mapped page.

This looks like it will need a bit more study and design so perhaps we 
could just take the quick fix of marking whole objects dirty for now 
(which will at least give *correct* behaviour) and then work out how to 
avoid marking whole objects dirty where possible.

.Dave.
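The kmap_page() vfunc floated above could be modelled like this. Again a purely hypothetical userspace sketch: "mapping" is just returning a pointer into a backing array, where the real default implementation would kmap() a struct page and the stolen implementation would use something like kmap_atomic_pfn() on the contiguous region the kernel's page tables don't otherwise describe.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NPAGES  2
#define PAGE_SZ 4096

/* Hypothetical object with a per-type mapping hook. */
struct gem_obj {
	bool dirty;
	bool page_dirty[NPAGES];
	char discontig[NPAGES][PAGE_SZ]; /* default: page-cache pages */
	char *stolen_base;               /* stolen: contiguous region */
	/* return a "mapped" address; mark dirty if requested */
	void *(*kmap_page)(struct gem_obj *obj, int pageno, bool dirty);
};

/* Default object: struct-page backed, supports per-page dirtying. */
static void *default_kmap_page(struct gem_obj *obj, int pageno, bool dirty)
{
	if (dirty)
		obj->page_dirty[pageno] = true;
	return obj->discontig[pageno];	/* kmap() analogue */
}

/*
 * Stolen object: no struct page and no backing store, so there is
 * nothing meaningful to mark; just compute the address
 * (kmap_atomic_pfn() analogue).
 */
static void *stolen_kmap_page(struct gem_obj *obj, int pageno, bool dirty)
{
	(void)dirty;
	return obj->stolen_base + (size_t)pageno * PAGE_SZ;
}
```

Callers would then write through whatever address comes back without having to know which kind of object they hold, which is exactly the "everything doesn't have to know they're different" property argued for above.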
On 01/12/15 12:42, Dave Gordon wrote:
> In various places, one or more pages of a GEM object are mapped into CPU
> address space and updated. In each such case, the object should be
> marked dirty, to ensure that the modifications are not discarded if the
> object is evicted under memory pressure.
>
> This is similar to commit
> 	commit 51bc140431e233284660b1d22c47dec9ecdb521e
> 	Author: Chris Wilson <chris@chris-wilson.co.uk>
> 	Date:   Mon Aug 31 15:10:39 2015 +0100
> 	drm/i915: Always mark the object as dirty when used by the GPU
>
> in which Chris ensured that updates by the GPU were not lost due to
> eviction, but this patch applies instead to the multiple places where
> object content is updated by the host CPU.
>
> It also incorporates and supersedes Alex Dai's earlier patch
> [PATCH v1] drm/i915/guc: Fix a fw content lost issue after it is evicted
>
> Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Alex Dai <yu.dai@intel.com>
> ---
>   drivers/gpu/drm/i915/i915_cmd_parser.c       | 1 +
>   drivers/gpu/drm/i915/i915_gem.c              | 1 +
>   drivers/gpu/drm/i915/i915_gem_dmabuf.c       | 2 ++
>   drivers/gpu/drm/i915/i915_gem_execbuffer.c   | 2 ++
>   drivers/gpu/drm/i915/i915_gem_render_state.c | 1 +
>   drivers/gpu/drm/i915/i915_guc_submission.c   | 1 +
>   drivers/gpu/drm/i915/intel_lrc.c             | 6 +++++-
>   7 files changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
> index 814d894..292bd5d 100644
> --- a/drivers/gpu/drm/i915/i915_cmd_parser.c
> +++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
> @@ -945,6 +945,7 @@ static u32 *copy_batch(struct drm_i915_gem_object *dest_obj,
>   		drm_clflush_virt_range(src, batch_len);
>
>   	memcpy(dst, src, batch_len);
> +	dest_obj->dirty = 1;
>
>   unmap_src:
>   	vunmap(src_base);
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 33adc8f..76bacba 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -5201,6 +5201,7 @@ i915_gem_object_create_from_data(struct drm_device *dev,
>   	i915_gem_object_pin_pages(obj);
>   	sg = obj->pages;
>   	bytes = sg_copy_from_buffer(sg->sgl, sg->nents, (void *)data, size);
> +	obj->dirty = 1;
>   	i915_gem_object_unpin_pages(obj);
>
>   	if (WARN_ON(bytes != size)) {
> diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
> index e9c2bfd..49a74c6 100644
> --- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
> @@ -208,6 +208,8 @@ static int i915_gem_begin_cpu_access(struct dma_buf *dma_buf, size_t start, size
>   		return ret;
>
>   	ret = i915_gem_object_set_to_cpu_domain(obj, write);
> +	if (write)
> +		obj->dirty = 1;
>   	mutex_unlock(&dev->struct_mutex);
>   	return ret;
>   }
> diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> index a4c243c..bc28a10 100644
> --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> @@ -281,6 +281,7 @@ relocate_entry_cpu(struct drm_i915_gem_object *obj,
>   	}
>
>   	kunmap_atomic(vaddr);
> +	obj->dirty = 1;
>
>   	return 0;
>   }
> @@ -372,6 +373,7 @@ relocate_entry_clflush(struct drm_i915_gem_object *obj,
>   	}
>
>   	kunmap_atomic(vaddr);
> +	obj->dirty = 1;
>
>   	return 0;
>   }
> diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c
> index 5026a62..dd1976c 100644
> --- a/drivers/gpu/drm/i915/i915_gem_render_state.c
> +++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
> @@ -144,6 +144,7 @@ static int render_state_setup(struct render_state *so)
>   	so->aux_batch_size = ALIGN(so->aux_batch_size, 8);
>
>   	kunmap(page);
> +	so->obj->dirty = 1;
>
>   	ret = i915_gem_object_set_to_gtt_domain(so->obj, false);
>   	if (ret)
> diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
> index a057cbd..b4a99a2 100644
> --- a/drivers/gpu/drm/i915/i915_guc_submission.c
> +++ b/drivers/gpu/drm/i915/i915_guc_submission.c
> @@ -583,6 +583,7 @@ static void lr_context_update(struct drm_i915_gem_request *rq)
>   	reg_state[CTX_RING_BUFFER_START+1] = i915_gem_obj_ggtt_offset(rb_obj);
>
>   	kunmap_atomic(reg_state);
> +	ctx_obj->dirty = 1;
>   }
>
>   /**
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 4ebafab..bc77794 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -391,6 +391,7 @@ static int execlists_update_context(struct drm_i915_gem_request *rq)
>   	}
>
>   	kunmap_atomic(reg_state);
> +	ctx_obj->dirty = 1;
>
>   	return 0;
>   }
> @@ -1030,7 +1031,7 @@ static int intel_lr_context_do_pin(struct intel_engine_cs *ring,
>   	if (ret)
>   		goto unpin_ctx_obj;
>
> -	ctx_obj->dirty = true;
> +	ctx_obj->dirty = 1;
>
>   	/* Invalidate GuC TLB. */
>   	if (i915.enable_guc_submission)
> @@ -1461,6 +1462,8 @@ static int intel_init_workaround_bb(struct intel_engine_cs *ring)
>
>   out:
>   	kunmap_atomic(batch);
> +	wa_ctx->obj->dirty = 1;
> +
>   	if (ret)
>   		lrc_destroy_wa_ctx_obj(ring);
>
> @@ -2536,6 +2539,7 @@ void intel_lr_context_reset(struct drm_device *dev,
>   		reg_state[CTX_RING_TAIL+1] = 0;
>
>   		kunmap_atomic(reg_state);
> +		ctx_obj->dirty = 1;
>
>   		ringbuf->head = 0;
>   		ringbuf->tail = 0;
>

I think I missed i915_gem_phys_pwrite().

i915_gem_gtt_pwrite_fast() marks the object dirty in most cases (via
set_to_gtt_domain()), but isn't called in all cases (or can return
before the set_domain). Then we try i915_gem_shmem_pwrite() for non-phys
objects (no check for stolen!) and that already marks the object dirty 
[aside: we might be able to change that to page-by-page?], but 
i915_gem_phys_pwrite() doesn't mark the object dirty, so we might lose 
updates there?

Or maybe we should move the marking up into i915_gem_pwrite_ioctl() 
instead. The target object is surely going to be dirtied, whatever type 
it is.

.Dave.
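Dave's "move the marking up into the ioctl" idea can be modelled in a few
lines of user-space C. This is a toy sketch only: the struct and function
names are illustrative stand-ins for the real driver paths
(i915_gem_pwrite_ioctl() and its backends), not the i915 code itself. The
point it demonstrates is that marking once at the top-level entry point
means no backend (shmem, phys, stolen) can forget to do it:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Toy model of a GEM object -- illustrative, not the real i915 struct. */
struct gem_object {
	bool dirty;
	char backing[64];
};

/* Backend writer, analogous to i915_gem_phys_pwrite() etc.  Note that
 * it does NOT touch obj->dirty itself. */
static void backend_pwrite(struct gem_object *obj,
			   const char *data, size_t len)
{
	memcpy(obj->backing, data, len);
}

/* Top-level entry point, analogous to i915_gem_pwrite_ioctl(): mark the
 * object dirty once here, before dispatching to any backend, so every
 * object type inherits the marking. */
static void pwrite_ioctl(struct gem_object *obj,
			 const char *data, size_t len)
{
	obj->dirty = true;	/* target is surely dirtied, whatever type */
	backend_pwrite(obj, data, len);
}
```

With this shape, a forgotten `obj->dirty = 1` in any single backend stops
being a data-loss bug.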
On Mon, Dec 07, 2015 at 12:04:18PM +0000, Dave Gordon wrote:
> On 07/12/15 08:29, Daniel Vetter wrote:
> >On Fri, Dec 04, 2015 at 05:28:29PM +0000, Dave Gordon wrote:
> >>On 04/12/15 09:57, Daniel Vetter wrote:
> >>>On Tue, Dec 01, 2015 at 01:21:07PM +0000, Dave Gordon wrote:
> >>>>On 01/12/15 13:04, Chris Wilson wrote:
> >>>>>On Tue, Dec 01, 2015 at 12:42:02PM +0000, Dave Gordon wrote:
> >>>>>>In various places, one or more pages of a GEM object are mapped into CPU
> >>>>>>address space and updated. In each such case, the object should be
> >>>>>>marked dirty, to ensure that the modifications are not discarded if the
> >>>>>>object is evicted under memory pressure.
> >>>>>>
> >>>>>>This is similar to commit
> >>>>>>	commit 51bc140431e233284660b1d22c47dec9ecdb521e
> >>>>>>	Author: Chris Wilson <chris@chris-wilson.co.uk>
> >>>>>>	Date:   Mon Aug 31 15:10:39 2015 +0100
> >>>>>>	drm/i915: Always mark the object as dirty when used by the GPU
> >>>>>>
> >>>>>>in which Chris ensured that updates by the GPU were not lost due to
> >>>>>>eviction, but this patch applies instead to the multiple places where
> >>>>>>object content is updated by the host CPU.
> >>>>>
> >>>>>Apart from that commit was to mask userspace bugs, here we are under
> >>>>>control of when the pages are marked and have chosen a different
> >>>>>per-page interface for CPU writes as opposed to per-object.
> >>>>>-Chris
> >>>>
> >>>>The pattern
> >>>>	get_pages();
> >>>>	kmap(get_page())
> >>>>	write
> >>>>	kunmap()
> >>>>occurs often enough that it might be worth providing a common function to do
> >>>>that and mark only the specific page dirty (other cases touch the whole
> >>>>object, so for those we can just set the obj->dirty flag and let put_pages()
> >>>>take care of propagating that to all the individual pages).
> >>>>
> >>>>But can we be sure that all the functions touched by this patch will operate
> >>>>only on regular (default) GEM objects (i.e. not phys, stolen, etc) 'cos some
> >>>>of those don't support per-page tracking. What about objects with no backing
> >>>>store -- can/should we mark those as dirty (which would prevent eviction)?
> >>>
> >>>I thought our special objects do clear obj->dirty on put_pages? Can you
> >>>please elaborate on your concern?
> >>>
> >>>While we discuss all this: A patch at the end to document dirty (maybe
> >>>even as a first stab at kerneldoc for i915_drm_gem_buffer_object) would be
> >>>awesome.
> >>>-Daniel
> >>
> >>In general, obj->dirty means that (some or) all the pages of the object
> >>(may) have been modified since last time the object was read from backing
> >>store, and that the modified data should be written back rather than
> >>discarded.
> >>
> >>Code that works only on default (gtt) GEM objects may be able to optimise
> >>writebacks by marking individual pages dirty, rather than the object as a
> >>whole. But not every GEM object has backing store, and even among those that
> >>do, some do not support per-page dirty tracking.
> >>
> >>These are the GEM objects we may want to consider:
> >>
> >>1. Default (gtt) object
> >>    * Discontiguous, lives in page cache while pinned during use
> >>    * Backed by shmfs (swap)
> >>    * put_pages() transfers dirty status from object to each page
> >>      before release
> >>    * shmfs ensures that dirty unpinned pages are written out
> >>      before deallocation
> >>    * Could optimise by marking individual pages at point of use,
> >>      rather than marking whole object and then pushing to all pages
> >>      during put_pages()
> >>
> >>2. Phys GEM object
> >>    * Lives in physically-contiguous system memory, pinned during use
> >>    * Backed by shmfs
> >>    * if obj->dirty, put_pages() *copies* all pages back to shmfs via
> >>      page cache RMW
> >>    * No per-page tracking, cannot optimise
> >>
> >>3. Stolen GEM object
> >>    * Lives in (physically-contiguous) stolen memory, always pinned
> >>    * No backing store!
> >>    * obj->dirty is irrelevant (ignored)
> >>    * put_pages() only called at end-of-life
> >>    * No per-page tracking (not meaningful anyway)
> >>
> >>4. Userptr GEM object
> >>    * Discontiguous, lives in page cache while pinned during use
> >>    * Backed by user process memory (which may then map to some
> >>      arbitrary file mapping?)
> >>    * put_pages() transfers dirty status from object to each page
> >>      before release
> >>    * dirty pages are still resident in user space, can be swapped
> >>      out when not pinned
> >>    * Could optimise by marking individual pages at point of use,
> >>      rather than marking whole object and then pushing to all pages
> >>      during put_pages()
> >>
> >>Are there any more?
> >>
> >>Given this diversity, it may be worth adding a dirty_page() vfunc, so that
> >>for those situations where a single page is dirtied AND the object type
> >>supports per-page tracking, we can take advantage of this to reduce copying.
> >>For objects that don't support per-page tracking, the implementation would
> >>just set obj->dirty.
> >>
> >>For example:
> >>     void (*dirty_page)(obj, pageno);
> >>possibly with the additional semantic that pageno == -1 means 'dirty the
> >>whole object'.
> >>
> >>A convenient further facility would be:
> >>     struct page *i915_gem_object_get_dirty_page(obj, pageno);
> >>which is just like i915_gem_object_get_page() but with the additional effect
> >>of marking the returned page dirty (by calling the above vfunc).
> >>[Aside: can we call set_page_dirty() on a non-shmfs-backed page?].
> >>
> >>This means that in all the places where I added 'obj->dirty = 1' after a
> >>kunmap() call, we would instead just change the earlier get_page() to
> >>get_dirty_page() instead, which provides better layering.
> >>
> >>Together these changes mean that obj->dirty would then be a purely private
> >>member for use by implementations of get_pages/put_pages().
> >>
> >>Opinions?
> >
> >Hm, I thought we've been careful with checking that an object is somehow
> >backed by struct pages, and only use the page-wise access if that's the
> >case. But looking at the execbuf relocate code we've probably already
> >screwed this up, or at least will when we expose stolen to userspace.
> >Userptr should still work (since ultimately it's struct page backed), and
> >phys gem object doesn't matter (if you put relocs into your cursor on
> >gen2-4.0 you get all the pieces). I think step one would be more nasty
> >test coverage, at least for the execbuf path.
> >
> >The other page-wise access path seem all internal, so I'm much less
> >worried about those.
> >-Daniel
> 
> So does this mean that i915_pages_create_for_stolen() isn't really doing
> what it says? After that function has been called, obj->pages is filled in -
> but is it then valid to call i915_gem_object_get_page()?
> That returns a pointer to the (preexisting) entry in the system page tables
> for the specified page, but isn't the anomalous thing about stolen memory
> the fact that the kernel doesn't know about it and doesn't include it in its
> page tables at all?

We use sg tables as a dual-purpose thing, both to look up struct pages and
to get at the device dma address. For stolen we only fill out one side of
this though, so it's not legal to call get_page on it.
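Daniel's "only fill out one side" can be sketched with a simplified
sg-entry model (hypothetical struct layout, not the real scatterlist):
each entry carries both a CPU-side struct page pointer and a device-side
dma address, and for stolen objects only the device side is populated:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified model of how i915 dual-purposes an sg-table entry:
 * one side for CPU access, one for device access.  Illustrative only. */
struct page;	/* opaque here; stolen memory has no struct page at all */

struct sg_entry {
	struct page *page;	/* CPU side: NULL for stolen objects */
	uint64_t dma_addr;	/* device side: always valid */
};

/* Analogous to i915_gem_object_get_page(): only meaningful when the
 * CPU side was populated -- a NULL result means "do not kmap this". */
static struct page *get_page(const struct sg_entry *sg)
{
	return sg->page;
}
```

This is why calling get_page on a stolen object is not legal: the lookup
"succeeds" structurally but yields nothing the CPU can map.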

> For kmap purposes, we don't really need the 'struct page' as we could use
> kmap_atomic_pfn() instead. So maybe to make stolen objects work in general
> without everything having to know they're different, we would need to move
> the kmap operation into the vfunc as well? That would mean the vfunc would
> be something like obj->kmap_page(obj, pageno, dirty) returning the vaddr of
> the mapped page.

Because of this awesome stuff hw engineers did to implement content
protection, the CPU is forbidden from accessing stolen :( It would indeed
make things simpler if we could do that; at one point we even considered
just giving the entire stolen range back to the linux page allocator using
memory hotplug. Unfortunately Stolen Is Special and there's no way to
avoid that.

> This looks like it will need a bit more study and design so perhaps we could
> just take the quick fix of marking whole objects dirty for now (which will
> at least give *correct* behaviour) and then work out how to avoid marking
> whole objects dirty where possible.

Tbh I've lost track of the patches. I do kinda like the idea of
get_page_dirty or adding a write flag to get_page. Since get_page should
be the official API for getting at pinned struct pages, it should cover most
cases. If
we still have anything left doing a raw loop over the sg table for pages,
I guess we could wrap it up in an i915 macro which takes a write boolean,
too.
-Daniel
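A rough sketch of the get_page_dirty idea discussed above, as a
user-space toy model (all names are hypothetical; the real get_page is an
sg-table lookup, and the real dirty tracking is set_page_dirty on shmem
pages): mark just the touched page when the object type supports per-page
tracking, and fall back to the whole-object flag when it doesn't:

```c
#include <assert.h>
#include <stdbool.h>

#define NPAGES 4

/* Toy per-page state -- not the kernel's struct page. */
struct page {
	bool dirty;
};

struct gem_object {
	bool dirty;		/* whole-object fallback flag */
	bool has_page_tracking;	/* false for e.g. phys objects */
	struct page pages[NPAGES];
};

static struct page *get_page(struct gem_object *obj, int n)
{
	return &obj->pages[n];
}

/* get_page() plus an implied write: mark only the touched page when the
 * object supports it, otherwise mark the whole object dirty. */
static struct page *get_dirty_page(struct gem_object *obj, int n)
{
	if (obj->has_page_tracking)
		obj->pages[n].dirty = true;
	else
		obj->dirty = true;
	return get_page(obj, n);
}
```

Callers that currently do get_page() + kmap + write + `obj->dirty = 1`
would just switch the accessor, which also gives the layering Dave asked
for: obj->dirty becomes private to the get_pages/put_pages
implementations.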
On Mon, Dec 07, 2015 at 12:51:49PM +0000, Dave Gordon wrote:
> I think I missed i915_gem_phys_pwrite().
> 
> i915_gem_gtt_pwrite_fast() marks the object dirty for most cases (via
> set_to_gtt_domain()), but isn't called for all cases (or can return before
> the set_domain). Then we try i915_gem_shmem_pwrite() for non-phys
> objects (no check for stolen!) and that already marks the object dirty
> [aside: we might be able to change that to page-by-page?], but
> i915_gem_phys_pwrite() doesn't mark the object dirty, so we might lose
> updates there?
> 
> Or maybe we should move the marking up into i915_gem_pwrite_ioctl() instead.
> The target object is surely going to be dirtied, whatever type it is.

phys objects are special: when binding we allocate new (contiguous)
storage. In put_pages_phys that gets copied back and the pages are
marked as dirty. While a phys object is pinned it's a kernel bug to look
at the shmem pages and a userspace bug to touch the cpu mmap (since that
data will simply be overwritten whenever the kernel feels like it).

phys objects are only used for cursors on old crap though, so ok if we
don't streamline this fairly quirky old ABI.
-Daniel
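In model form (toy code with illustrative names, not the driver's actual
phys-object implementation), the lifecycle Daniel describes is roughly:
while pinned, the live data exists only in the contiguous copy, and
put_pages unconditionally copies back and dirties the shmem side:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Toy model of a phys object: while pinned, data lives only in a
 * contiguous kernel allocation; the shmem pages are stale until
 * put_pages copies everything back and marks the pages dirty. */
struct phys_obj {
	char shmem[32];		/* backing store, stale while pinned */
	char phys[32];		/* contiguous copy the hw actually uses */
	bool pinned;
	bool pages_dirty;
};

static void get_pages_phys(struct phys_obj *obj)
{
	/* pull current contents into the contiguous copy */
	memcpy(obj->phys, obj->shmem, sizeof(obj->phys));
	obj->pinned = true;
}

static void put_pages_phys(struct phys_obj *obj)
{
	/* copy back and mark the shmem pages dirty unconditionally */
	memcpy(obj->shmem, obj->phys, sizeof(obj->shmem));
	obj->pages_dirty = true;
	obj->pinned = false;
}
```

This is why reading the shmem pages of a pinned phys object (as a
shmem-path pread would) returns stale data, and why writes to the cpu
mmap get clobbered at the next copy-back.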
On 10/12/15 08:58, Daniel Vetter wrote:
> On Mon, Dec 07, 2015 at 12:51:49PM +0000, Dave Gordon wrote:
>> I think I missed i915_gem_phys_pwrite().
>>
>> i915_gem_gtt_pwrite_fast() marks the object dirty for most cases (via
>> set_to_gtt_domain()), but isn't called for all cases (or can return before
>> the set_domain). Then we try i915_gem_shmem_pwrite() for non-phys
>> objects (no check for stolen!) and that already marks the object dirty
>> [aside: we might be able to change that to page-by-page?], but
>> i915_gem_phys_pwrite() doesn't mark the object dirty, so we might lose
>> updates there?
>>
>> Or maybe we should move the marking up into i915_gem_pwrite_ioctl() instead.
>> The target object is surely going to be dirtied, whatever type it is.
>
> phys objects are special: when binding we allocate new
> (contiguous) storage. In put_pages_phys that gets copied back and pages
> marked as dirty. While a phys object is pinned it's a kernel bug to look
> at the shmem pages and a userspace bug to touch the cpu mmap (since that
> data will simply be overwritten whenever the kernel feels like).
>
> phys objects are only used for cursors on old crap though, so ok if we
> don't streamline this fairly quirky old ABI.
> -Daniel

So is pread broken already for 'phys'? In the pwrite code, we have 
i915_gem_phys_pwrite(), which looks OK, but there isn't a corresponding 
i915_gem_phys_pread(); instead it will call i915_gem_shmem_pread(), and 
I'm not sure that will work! The question being: does the kernel have 
page table slots corresponding to the DMA area allocated, otherwise
the for_each_sg_page()/sg_page_iter_page() in i915_gem_shmem_pread() 
isn't going to give meaningful results. And I found this comment in 
drm_pci_alloc() (called from i915_gem_object_attach_phys()):

         /* XXX - Is virt_to_page() legal for consistent mem? */
         /* Reserve */
         for (addr = (unsigned long)dmah->vaddr, sz = size;
              sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) {
                 SetPageReserved(virt_to_page((void *)addr));
         }

(and does it depend on which memory configuration is selected?).

See also current thread on "Support for pread/pwrite from/to non shmem 
backed objects" ...

.Dave.
On Fri, Dec 11, 2015 at 12:19:09PM +0000, Dave Gordon wrote:
> On 10/12/15 08:58, Daniel Vetter wrote:
> >On Mon, Dec 07, 2015 at 12:51:49PM +0000, Dave Gordon wrote:
> >>I think I missed i915_gem_phys_pwrite().
> >>
> >>i915_gem_gtt_pwrite_fast() marks the object dirty for most cases (via
> >>set_to_gtt_domain()), but isn't called for all cases (or can return before
> >>the set_domain). Then we try i915_gem_shmem_pwrite() for non-phys
> >>objects (no check for stolen!) and that already marks the object dirty
> >>[aside: we might be able to change that to page-by-page?], but
> >>i915_gem_phys_pwrite() doesn't mark the object dirty, so we might lose
> >>updates there?
> >>
> >>Or maybe we should move the marking up into i915_gem_pwrite_ioctl() instead.
> >>The target object is surely going to be dirtied, whatever type it is.
> >
> >phys objects are special: when binding we allocate new
> >(contiguous) storage. In put_pages_phys that gets copied back and pages
> >marked as dirty. While a phys object is pinned it's a kernel bug to look
> >at the shmem pages and a userspace bug to touch the cpu mmap (since that
> >data will simply be overwritten whenever the kernel feels like).
> >
> >phys objects are only used for cursors on old crap though, so ok if we
> >don't streamline this fairly quirky old ABI.
> >-Daniel
> 
> So is pread broken already for 'phys' ?

Yes. A completely unused corner of the API.
-Chris
On Fri, Dec 11, 2015 at 12:29:40PM +0000, Chris Wilson wrote:
> On Fri, Dec 11, 2015 at 12:19:09PM +0000, Dave Gordon wrote:
> > On 10/12/15 08:58, Daniel Vetter wrote:
> > >On Mon, Dec 07, 2015 at 12:51:49PM +0000, Dave Gordon wrote:
> > >>I think I missed i915_gem_phys_pwrite().
> > >>
> > >>i915_gem_gtt_pwrite_fast() marks the object dirty for most cases (via
> > >>set_to_gtt_domain()), but isn't called for all cases (or can return before
> > >>the set_domain). Then we try i915_gem_shmem_pwrite() for non-phys
> > >>objects (no check for stolen!) and that already marks the object dirty
> > >>[aside: we might be able to change that to page-by-page?], but
> > >>i915_gem_phys_pwrite() doesn't mark the object dirty, so we might lose
> > >>updates there?
> > >>
> > >>Or maybe we should move the marking up into i915_gem_pwrite_ioctl() instead.
> > >>The target object is surely going to be dirtied, whatever type it is.
> > >
> > >phys objects are special: when binding we allocate new
> > >(contiguous) storage. In put_pages_phys that gets copied back and pages
> > >marked as dirty. While a phys object is pinned it's a kernel bug to look
> > >at the shmem pages and a userspace bug to touch the cpu mmap (since that
> > >data will simply be overwritten whenever the kernel feels like).
> > >
> > >phys objects are only used for cursors on old crap though, so ok if we
> > >don't streamline this fairly quirky old ABI.
> > >-Daniel
> > 
> > So is pread broken already for 'phys' ?
> 
> Yes. A completely unused corner of the API.

I think it would be useful to extract all the phys object stuff into
i915_gem_phys_obj.c, add minimal kerneldoc for the functions, and then an
overview section which explains in detail how fucked up this little bit of
ABI history lore is. I can do the overview section, but the
extraction/basic kerneldoc will probably take a bit longer to get around
to.
-Daniel