[1/2] cl: Update integer limit tests to detect incorrect storage sizes

Submitted by Aaron Watry on Sept. 10, 2015, 3:12 p.m.

Details

Message ID 1441897970-4005-1-git-send-email-awatry@gmail.com
State New


Commit Message

Aaron Watry Sept. 10, 2015, 3:12 p.m.
The tests for the char/short/int/long minimums do not properly
check that the value is stored in the correct type.  E.g. (-32768)
is actually typed as an int (not a short) once the macro expands,
and INT_MIN can end up stored as a long: a definition like
(-2147483648) contains a literal too large for an int, so the
literal takes type long, and negating it keeps that type.

By subtracting a zero vector of the expected element type from the
given defined *_MIN and then grabbing the first element of the
resulting vector, we make sure that the values are actually stored
in the correct type.
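
To see why this catches a bad definition: OpenCL C only allows a
scalar operand in a scalar/vector operation when its rank does not
exceed that of the vector's element type, so a *_MIN that expands to
a wider type makes the subtraction itself fail to compile.  A minimal
sketch of the failure mode (BAD_SHRT_MIN is hypothetical, not part of
this patch):

  /* Assume a broken header leaves SHRT_MIN typed as int: */
  #define BAD_SHRT_MIN (-32768)   /* plain int literal, not a short */

  kernel void demo(global int* out) {
    /* int scalar - short2 vector: the scalar outranks the element
     * type, so this line is intentionally a compile error.  With a
     * correctly typed (short) minimum, the same expression compiles
     * and yields the scalar value unchanged. */
    out[0] = (BAD_SHRT_MIN - (short2)(0)).s0;
  }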

Reported-by: Moritz Pflanzer <moritz.pflanzer14@imperial.ac.uk>
Signed-off-by: Aaron Watry <awatry@gmail.com>
---
 tests/cl/program/execute/int-definitions.cl | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)


diff --git a/tests/cl/program/execute/int-definitions.cl b/tests/cl/program/execute/int-definitions.cl
index 011599d..3d8ee63 100644
--- a/tests/cl/program/execute/int-definitions.cl
+++ b/tests/cl/program/execute/int-definitions.cl
@@ -36,29 +36,29 @@  kernel void test_char(global int* out) {
   int i = 0;
   out[i++] = CHAR_BIT;
   out[i++] = CHAR_MAX;
-  out[i++] = CHAR_MIN;
+  out[i++] = (CHAR_MIN - (char2)(0)).s0;
   out[i++] = SCHAR_MAX;
-  out[i++] = SCHAR_MIN;
+  out[i++] = (SCHAR_MIN - (char2)(0)).s0;
   out[i++] = UCHAR_MAX;
 }
 
 kernel void test_short(global int* out) {
   int i = 0;
   out[i++] = SHRT_MAX;
-  out[i++] = SHRT_MIN;
+  out[i++] = (SHRT_MIN - (short2)(0)).s0;
   out[i++] = USHRT_MAX;
 }
 
 kernel void test_int(global int* out) {
   int i = 0;
   out[i++] = INT_MAX;
-  out[i++] = INT_MIN;
+  out[i++] = (INT_MIN - (int2)(0)).s0;
   out[i++] = UINT_MAX;
 }
 
 kernel void test_long(global long* out) {
   int i = 0;
   out[i++] = LONG_MAX;
-  out[i++] = LONG_MIN;
+  out[i++] = (LONG_MIN - (long2)(0)).s0;
   out[i++] = ULONG_MAX;
 }

Comments

On Thursday 10 September 2015 10:12:49 Aaron Watry wrote:
> The tests for the char/short/int/long minimums do not properly
> check that the value is stored in the correct type.  E.g. (-32768)
> is actually typed as an int (not a short) once the macro expands,
> and INT_MIN can end up stored as a long: a definition like
> (-2147483648) contains a literal too large for an int, so the
> literal takes type long, and negating it keeps that type.
> 
> By subtracting a zero vector of the expected element type from the
> given defined *_MIN and then grabbing the first element of the
> resulting vector, we make sure that the values are actually stored
> in the correct type.

It puzzled me why a vector operation would raise an error here; it's
because CL is stricter than C. Could you please add something like
that to the commit message (as explained in the llvm thread):

According to chapter 6.2.6, "Usual Arithmetic Conversions":
"An error shall occur if any scalar operand has greater rank than the
type of the vector element."
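
A minimal standalone illustration of that rank rule (rank_rule_demo
is a made-up kernel, not from the patch):

  kernel void rank_rule_demo(global short* out) {
    short2 v  = (short2)(0);
    short  s  = 1;
    int    i  = 1;
    short2 ok = s - v;      /* short: same rank as the element type,
                             * so the scalar is widened to short2 */
    /* short2 bad = i - v;     int outranks short -> compile error */
    out[0] = ok.s0;
  }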

In any case, the series is
Reviewed-by: Serge Martin <edb+piglit@sigluy.net>
