[3/4] arb_texture_view: Pass correct layer index to shader

Submitted by Chris Forbes on Jan. 12, 2014, 12:39 a.m.

Message ID: 1389487143-3010-3-git-send-email-chrisf@ijw.co.nz

Commit Message

Chris Forbes Jan. 12, 2014, 12:39 a.m.
We want to sample from the last layer in the view, but we were actually
trying to sample from a layer beyond that [by the texture's minimum
layer].

This might have worked on hardware where the sampler clamps the layer
index before sampling [NV?], but that relies on undefined behavior.

Signed-off-by: Chris Forbes <chrisf@ijw.co.nz>
---
 tests/spec/arb_texture_view/rendering_levels.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


diff --git a/tests/spec/arb_texture_view/rendering_levels.c b/tests/spec/arb_texture_view/rendering_levels.c
index ef29f78..86bf055 100644
--- a/tests/spec/arb_texture_view/rendering_levels.c
+++ b/tests/spec/arb_texture_view/rendering_levels.c
@@ -205,7 +205,7 @@  test_render_layers(void)
 		glClear(GL_COLOR_BUFFER_BIT);
 
 		expectedLayer = l + numLayers[l] - 1;
-		draw_3d_depth(-1.0, -1.0, 2.0, 2.0, expectedLayer);
+		draw_3d_depth(-1.0, -1.0, 2.0, 2.0, numLayers[l] - 1);
 
 		expected[0] = Colors[expectedLayer][0] / 255.0;
 		expected[1] = Colors[expectedLayer][1] / 255.0;

Comments

Jon Ashburn

See inline.

On 01/11/2014 05:39 PM, Chris Forbes wrote:
> We want to sample from the last layer in the view, but we were actually
> trying to sample from a layer beyond that [by the texture's minimum
> layer].
>
> This might have worked on hardware where the sampler clamps the layer
> index before sampling [NV?], but that relies on undefined behavior.
>
> Signed-off-by: Chris Forbes <chrisf@ijw.co.nz>
> ---
>   tests/spec/arb_texture_view/rendering_levels.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tests/spec/arb_texture_view/rendering_levels.c b/tests/spec/arb_texture_view/rendering_levels.c
> index ef29f78..86bf055 100644
> --- a/tests/spec/arb_texture_view/rendering_levels.c
> +++ b/tests/spec/arb_texture_view/rendering_levels.c
> @@ -205,7 +205,7 @@ test_render_layers(void)
>   		glClear(GL_COLOR_BUFFER_BIT);
>   
>   		expectedLayer = l + numLayers[l] - 1;
> -		draw_3d_depth(-1.0, -1.0, 2.0, 2.0, expectedLayer);
> +		draw_3d_depth(-1.0, -1.0, 2.0, 2.0, numLayers[l] - 1);
>   
With this change, the draw depth becomes inconsistent with the indexing
into the Colors array. How about this instead:

expectedLayer = numLayers[l] - 1;

>   		expected[0] = Colors[expectedLayer][0] / 255.0;
>   		expected[1] = Colors[expectedLayer][1] / 255.0;
Chris Forbes

Could do that, but one side or the other *must* account for the view
having a nonzero MinLayer.

Colors[] is indexed by the layer index relative to the underlying
texture; expectedLayer is currently relative to the underlying texture;
the value used in the shader needs to be relative to the view [so
expectedLayer - view's MinLayer].
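
To make that concrete, here is a minimal sketch of the bookkeeping
(assuming, as the test's arithmetic implies, that the view for
iteration l is created with MinLayer = l and NumLayers = numLayers[l]):

    /* Last layer of the view, expressed two ways. */
    int minLayer      = l;                        /* view's MinLayer */
    int expectedLayer = l + numLayers[l] - 1;     /* texture-relative index */
    int shaderLayer   = expectedLayer - minLayer; /* view-relative: numLayers[l] - 1 */

    /* Colors[] is texture-relative; the shader coordinate is view-relative. */
    expected[0] = Colors[expectedLayer][0] / 255.0;
    draw_3d_depth(-1.0, -1.0, 2.0, 2.0, shaderLayer);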

Did this actually work on nvidia on master?


On Wed, Jan 15, 2014 at 5:36 AM, Jon Ashburn <jon@lunarg.com> wrote:
> [snip]
Chris Forbes

I decided to go back to the spec and verify some of my assumptions --
turns out I was wrong about undefined results from out-of-range layer
indexes. I had confused the rules for sampling array textures with the
rules for layered framebuffer attachments.

From the GL4 core profile spec, 3.8 Texturing, Coordinate Wrapping and
Texel Selection, on p212 [228 of the PDF]:

    For one- and two-dimensional array textures, the texel is obtained
from image layer l, where

      l = clamp(floor(t + 0.5), 0, h_t-1), for one-dimensional array textures,
          clamp(floor(r + 0.5), 0, d_t-1), for two-dimensional array textures.

So your test should work, but relies on this.
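
In code, the layer selection the spec describes amounts to something
like this (an illustrative sketch only, for the two-dimensional array
case; r is the layer coordinate and d_t the texture's layer count):

    #include <math.h>

    static int
    select_layer(float r, int d_t)
    {
        /* l = clamp(floor(r + 0.5), 0, d_t - 1) */
        int layer = (int)floorf(r + 0.5f);
        if (layer < 0)
            layer = 0;
        if (layer > d_t - 1)
            layer = d_t - 1;
        return layer;
    }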

I'll drop this patch, but also add some additional cases which try to
sample from a layer within the view, rather than always exercising
clamping.
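
Such a case might look like the following (a hypothetical sketch
reusing the existing test's names, not the actual follow-up patch):

    /* Sample the first layer of the view (view-relative index 0).
     * That resolves to texture layer l = MinLayer, well inside the
     * view, so no clamping is exercised. */
    expectedLayer = l;  /* texture-relative, for Colors[] */
    draw_3d_depth(-1.0, -1.0, 2.0, 2.0, 0);  /* view-relative */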

-- Chris

On Wed, Jan 15, 2014 at 8:06 AM, Chris Forbes <chrisf@ijw.co.nz> wrote:
> [snip]
Jon Ashburn

Okay, I had misunderstood: the shader access is relative to the view.
Your patch looks like a good bug fix.

My original test code on the master branch did work on Nvidia.

Jon
On 01/14/2014 12:06 PM, Chris Forbes wrote:
> [snip]