[Mesa-dev,5/7] mesa: add support for AMD_blend_minmax_factor

Submitted by Maxence Le Doré on Jan. 3, 2014, 1:18 a.m.

Details

Message ID 1388711906-4910-5-git-send-email-maxence.ledore@gmail.com
State New

Commit Message

Maxence Le Doré Jan. 3, 2014, 1:18 a.m.
---
 src/mesa/main/blend.c      | 3 +++
 src/mesa/main/extensions.c | 1 +
 src/mesa/main/mtypes.h     | 1 +
 3 files changed, 5 insertions(+)


diff --git a/src/mesa/main/blend.c b/src/mesa/main/blend.c
index 9e11ca7..4995143 100644
--- a/src/mesa/main/blend.c
+++ b/src/mesa/main/blend.c
@@ -326,6 +326,9 @@  legal_blend_equation(const struct gl_context *ctx, GLenum mode)
    case GL_MIN:
    case GL_MAX:
       return ctx->Extensions.EXT_blend_minmax;
+   case GL_FACTOR_MIN_AMD:
+   case GL_FACTOR_MAX_AMD:
+      return ctx->Extensions.AMD_blend_minmax_factor;
    default:
       return GL_FALSE;
    }
diff --git a/src/mesa/main/extensions.c b/src/mesa/main/extensions.c
index f0e1858..b46c788 100644
--- a/src/mesa/main/extensions.c
+++ b/src/mesa/main/extensions.c
@@ -299,6 +299,7 @@  static const struct extension extension_table[] = {
 
    /* Vendor extensions */
    { "GL_3DFX_texture_compression_FXT1",           o(TDFX_texture_compression_FXT1),           GL,             1999 },
+   { "GL_AMD_blend_minmax_factor",                 o(AMD_blend_minmax_factor),                 GL,             2009 },
    { "GL_AMD_conservative_depth",                  o(ARB_conservative_depth),                  GL,             2009 },
    { "GL_AMD_draw_buffers_blend",                  o(ARB_draw_buffers_blend),                  GL,             2009 },
    { "GL_AMD_performance_monitor",                 o(AMD_performance_monitor),                 GL,             2007 },
diff --git a/src/mesa/main/mtypes.h b/src/mesa/main/mtypes.h
index f93bb56..4081e4e 100644
--- a/src/mesa/main/mtypes.h
+++ b/src/mesa/main/mtypes.h
@@ -3433,6 +3433,7 @@  struct gl_extensions
    GLboolean EXT_vertex_array_bgra;
    GLboolean OES_standard_derivatives;
    /* vendor extensions */
+   GLboolean AMD_blend_minmax_factor;
    GLboolean AMD_performance_monitor;
    GLboolean AMD_seamless_cubemap_per_texture;
    GLboolean AMD_vertex_shader_layer;
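
For reference, here is a minimal usage sketch of what the extension enables
on the application side. This is not part of the patch; GL_FACTOR_MIN_AMD /
GL_FACTOR_MAX_AMD are the tokens defined by the AMD_blend_minmax_factor
spec, and the extension-string check is just one possible way to detect
support.

#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* With AMD_blend_minmax_factor, min/max blending honors the blend factors:
 * result = min(src_factor * src, dst_factor * dst). */
static void
enable_factored_min_blending(void)
{
   const char *exts = (const char *) glGetString(GL_EXTENSIONS);

   if (exts && strstr(exts, "GL_AMD_blend_minmax_factor")) {
      glEnable(GL_BLEND);
      glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
      /* Plain GL_MIN (EXT_blend_minmax) would ignore the factors above. */
      glBlendEquation(GL_FACTOR_MIN_AMD);
   }
}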

Comments

On 03.01.2014 02:18, Maxence Le Doré wrote:
> [patch snipped]

Where did you get the 2009 year from? The earliest I can find is 2010.
Also, I think it would be nice if there were some test (piglit) for this.
And could this be enabled for gallium drivers? Right now the state
tracker translates the blend factors away for min/max, and the gallium
interface as such could already handle this extension without any effort.
That said, I'm not sure all drivers can handle it (nvidia in particular):
as far as I remember, d3d (9 and 10) also requires blend factors to be
ignored, so it is indeed possible that not everyone can do it. In that
case a cap bit would be required.

Roland
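
(For readers unfamiliar with that part of the state tracker: "translating
the blend factors away" amounts to roughly the sketch below. This is an
illustrative rewrite, not the actual st/mesa code; the struct and enum
names are the real gallium ones, the helper name is made up.)

#include "pipe/p_defines.h"
#include "pipe/p_state.h"

/* Force the factors to ONE whenever MIN/MAX is used, matching what
 * GL_EXT_blend_minmax (and d3d) semantics require. */
static void
translate_minmax_factors(struct pipe_rt_blend_state *rt)
{
   if (rt->rgb_func == PIPE_BLEND_MIN || rt->rgb_func == PIPE_BLEND_MAX) {
      rt->rgb_src_factor = PIPE_BLENDFACTOR_ONE;
      rt->rgb_dst_factor = PIPE_BLENDFACTOR_ONE;
   }
   if (rt->alpha_func == PIPE_BLEND_MIN || rt->alpha_func == PIPE_BLEND_MAX) {
      rt->alpha_src_factor = PIPE_BLENDFACTOR_ONE;
      rt->alpha_dst_factor = PIPE_BLENDFACTOR_ONE;
   }
}
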
- You're right! I've just checked the spec and the initial draft is from
March 2010; 2009 is a mistake. Thanks.
- You're the third one to request piglit tests, so I'll have to do it.
Otherwise I couldn't complain if some devs got angry at me for sending
patches without ever including at least a small corresponding update to
the piglit test suite.
- About the fact that some gallium-driven GPUs may not support factored
min/max blending: I initially assumed this and, indeed, added a cap
enum. But when I saw the way min/max blending is implemented in gallium
for all drivers, I assumed it was fine (a piglit test for
EXT_blend_minmax has been in the codebase since 2004; if that test
failed on some gallium drivers, I assumed the problem would already have
been fixed on the gallium side). Of course I should have checked this
myself instead of assuming it.
- As for d3d 9/10 requiring blend factors to be ignored, I didn't even
know that. It does sound important indeed.

So I'm going to correct this, but first I'll send one or more piglit
tests to the appropriate dev mailing list.
I found two interesting models to base them on in the piglit codebase:
tests/general/blendminmax.c
tests/general/blendsquare.c


2014/1/3 Roland Scheidegger <sroland@vmware.com>:
> [quoted message snipped]
On 03.01.2014 16:51, Maxence Le Doré wrote:
> - You're right! I've just checked the spec and the initial draft is from
> March 2010; 2009 is a mistake. Thanks.
> - You're the third one to request piglit tests, so I'll have to do it.
> Otherwise I couldn't complain if some devs got angry at me for sending
> patches without ever including at least a small corresponding update to
> the piglit test suite.
> - About the fact that some gallium-driven GPUs may not support factored
> min/max blending: I initially assumed this and, indeed, added a cap
> enum. But when I saw the way min/max blending is implemented in gallium
> for all drivers, I assumed it was fine (a piglit test for
> EXT_blend_minmax has been in the codebase since 2004; if that test
> failed on some gallium drivers, I assumed the problem would already have
> been fixed on the gallium side). Of course I should have checked this
> myself instead of assuming it.
Well yes, in theory everything should just work with gallium, since the
state tracker was required to translate the blend factors away without
this extension. However, it is still possible that some hw simply ignores
the blend factors when min/max is used. That is certainly something a
piglit test would show.
Though actually, since d3d requires blend factors to be ignored for
min/max too, I'm pretty sure the svga driver can't handle this, so a cap
bit would be required in any case.
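
(Sketch of what such a gate could look like in the state tracker's
extension setup. PIPE_CAP_BLEND_MINMAX_FACTOR is a hypothetical name used
only for illustration; no such cap exists at this point.)

#include "main/mtypes.h"
#include "pipe/p_screen.h"

/* Hypothetical cap gate: drivers that must ignore the factors for
 * min/max simply never report the cap, so the extension stays off. */
static void
init_blend_minmax_factor(struct gl_context *ctx, struct pipe_screen *screen)
{
   if (screen->get_param(screen, PIPE_CAP_BLEND_MINMAX_FACTOR))
      ctx->Extensions.AMD_blend_minmax_factor = GL_TRUE;
}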

> - As for d3d 9/10 requiring blend factors to be ignored, I didn't even
> know that. It does sound important indeed.
> 
> So I'm going to correct this, but first I'll send one or more piglit
> tests to the appropriate dev mailing list.
> I found two interesting models to base them on in the piglit codebase:
> tests/general/blendminmax.c
> tests/general/blendsquare.c

I guess a simple test would do without really needing to be exhaustive,
as long as the expected result is different from what you'd get by
ignoring the blend factors with the MIN/MAX equation.

Roland
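
(To make the "different expected result" concrete, the core of such a test
could look roughly like this. The draw and probe helpers are placeholders
for whatever the test framework provides, e.g. piglit_draw_rect() and
piglit_probe_pixel_rgba().)

#include <stdbool.h>
#include <GL/gl.h>
#include <GL/glext.h>

static bool
test_factor_min(void)
{
   /* dst = 0.5 grey, src = 1.0 white, both factors = GL_CONSTANT_COLOR = 0.25:
    *   EXT_blend_minmax (factors ignored): min(1.0, 0.5)               = 0.5
    *   AMD_blend_minmax_factor:            min(0.25 * 1.0, 0.25 * 0.5) = 0.125
    * so a single pixel probe distinguishes the two behaviours. */
   static const float expected[4] = { 0.125f, 0.125f, 0.125f, 0.125f };

   glClearColor(0.5f, 0.5f, 0.5f, 0.5f);
   glClear(GL_COLOR_BUFFER_BIT);

   glEnable(GL_BLEND);
   glBlendFunc(GL_CONSTANT_COLOR, GL_CONSTANT_COLOR);
   glBlendColor(0.25f, 0.25f, 0.25f, 0.25f);
   glBlendEquation(GL_FACTOR_MIN_AMD);

   glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
   draw_fullscreen_quad();              /* placeholder for the draw call */

   return check_pixel(0, 0, expected);  /* placeholder for the probe */
}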

On 03.01.2014 16:22, Roland Scheidegger wrote:
> On 03.01.2014 02:18, Maxence Le Doré wrote:
>> [patch snipped]
> 
> Where did you get the 2009 year from? The earliest I can find is 2010.
> Also, I think it would be nice if there were some test (piglit) for this.


> And could this be enabled for gallium drivers? Right now the state
> tracker translates the blend factors away for min/max, and the gallium
> interface as such could already handle this extension without any effort.
> That said, I'm not sure all drivers can handle it (nvidia in particular):
> as far as I remember, d3d (9 and 10) also requires blend factors to be
> ignored, so it is indeed possible that not everyone can do it. In that
> case a cap bit would be required.
Oh sorry, this didn't really make much sense the way it was worded, as
you did enable it for gallium drivers (I missed this due to an
aggressive spam filter...).
I've got another minor complaint about 6/7: the commit summary is wrong
(ARB_blend_minmax_factor).

Roland