[3/3] framework: Exit if a filter removes all tests from a profile

Submitted by Dylan Baker on April 12, 2017, 5:55 p.m.

Details

Message ID 20170412175506.18695-3-dylan@pnwbakers.com
State New
Series "Series without cover letter" (rev 1) in Piglit


Commit Message

Dylan Baker April 12, 2017, 5:55 p.m.
It doesn't make sense to run if a user has removed all tests from a
selected profile. Currently, if all tests are removed, an assertion is
hit in the backend whose message is not at all clear about what went
wrong. This error should be much easier to understand.

Signed-off-by: Dylan Baker <dylanx.c.baker@intel.com>
---
 framework/profile.py | 7 +++++++
 1 file changed, 7 insertions(+)


diff --git a/framework/profile.py b/framework/profile.py
index 4604367e1..ce0b24ce8 100644
--- a/framework/profile.py
+++ b/framework/profile.py
@@ -389,6 +389,13 @@  def run(profiles, logger, backend, concurrency):
     profiles = [(p, list(p.itertests())) for p in profiles]
     log = LogManager(logger, sum(len(l) for _, l in profiles))
 
+    # check that after the filters are run there are actually tests to run
+    for p, l in profiles:
+        if not l:
+            raise exceptions.PiglitUserError(
+                'After running filters there are no tests in '
+                'profile "{}"'.format(p.name))
+
     def test(name, test, profile, this_pool=None):
         """Function to call test.execute from map"""
         with backend.write_test(name) as w:
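
In essence, the guard turns an empty filtered test list into an early,
readable failure. Here is a minimal standalone sketch of that behavior
(plain Python with hypothetical stand-ins, not the actual Piglit API):

    class PiglitUserError(Exception):
        """Stand-in for framework.exceptions.PiglitUserError."""

    def check_profiles(profiles):
        # profiles mirrors the (p, list(p.itertests())) pairs built in
        # run(), simplified here to (profile_name, filtered_test_list).
        for name, tests in profiles:
            if not tests:
                raise PiglitUserError(
                    'After running filters there are no tests in '
                    'profile "{}"'.format(name))

    # A filter that matches nothing (e.g. -t no-such-test) empties the list:
    try:
        check_profiles([('quick', [])])
    except PiglitUserError as err:
        print('piglit: error:', err)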

Comments

On 04/12/2017 11:55 AM, Dylan Baker wrote:
> [commit message and diff context snipped]
> +            raise exceptions.PiglitUserError(
> +                'After running filters there are no tests in '
> +                'profile "{}"'.format(p.name))

How about "no matching tests"?

Otherwise, works great.  Thanks for the fast work!

Reviewed-by: Brian Paul <brianp@vmware.com>
Tested-by: Brian Paul <brianp@vmware.com>
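
Presumably the suggested wording would change the new raise to something
like this (hypothetical revision, not a reviewed patch):

    raise exceptions.PiglitUserError(
        'After running filters there are no matching tests in '
        'profile "{}"'.format(p.name))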

Quoting Brian Paul (2017-04-12 13:04:59)
> How about "no matching tests"?
>
> Otherwise, works great.  Thanks for the fast work!
>
> Reviewed-by: Brian Paul <brianp@vmware.com>
> Tested-by: Brian Paul <brianp@vmware.com>

Thanks! These are pushed to master.

On 04/13/2017 09:46 PM, Dylan Baker wrote:
> [earlier quoting snipped]
>
> +    # check that after the filters are run there are actually tests to run
> +    for p, l in profiles:
> +        if not l:
> +            raise exceptions.PiglitUserError(
> +                'After running filters there are no tests in '
> +                'profile "{}"'.format(p.name))

Bumped into an issue caused by this commit with IGT tests.

When the last test of a run never finishes and you attempt to

   piglit resume -n test-results-path

this exception is raised instead of the expected result of finishing up 
the run.
Quoting Petri Latvala (2017-06-20 05:41:11)
> [earlier quoting snipped]
>
> Bumped into an issue caused by this commit with IGT tests.
>
> When the last test of a run never finishes and you attempt to
>
>    piglit resume -n test-results-path
>
> this exception is raised instead of the expected result of finishing up
> the run.
>
> --
> Petri Latvala

That is expected with the command line you've supplied.

The -n/--no-retry option instructs piglit not to retry tests that started but
didn't finish; if you get the same behavior without the -n option, that is a bug.

Dylan
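
To spell out the flag's effect, here is a hedged sketch (simplified
statuses and a helper name invented for illustration, not the actual
Piglit resume code):

    def tests_to_resume(all_tests, statuses, no_retry):
        # statuses: name -> 'complete' | 'incomplete'; absent means
        # the test never started.
        remaining = []
        for name in all_tests:
            status = statuses.get(name)
            if status == 'complete':
                continue  # already has a result on disk
            if status == 'incomplete' and no_retry:
                continue  # started but never finished; -n skips it
            remaining.append(name)
        return remaining

    statuses = {'igt/a': 'complete', 'igt/b': 'incomplete'}
    print(tests_to_resume(['igt/a', 'igt/b'], statuses, no_retry=False))
    # -> ['igt/b']: the interrupted test is retried
    print(tests_to_resume(['igt/a', 'igt/b'], statuses, no_retry=True))
    # -> []: nothing left, which now trips the new PiglitUserError
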
On 06/20/2017 08:59 PM, Dylan Baker wrote:
> [earlier quoting snipped]
>
> That is expected with the command line you've supplied.
>
> The -n/--no-retry option instructs piglit not to retry tests that started but
> didn't finish; if you get the same behavior without the -n option, that is a bug.
>
> Dylan


What is the supported method, then, to loop resume until results.json.bz2
gets generated? The previous behaviour was that piglit reported there was
nothing more to do and generated it.
Quoting Petri Latvala (2017-06-21 01:37:21)
> [earlier quoting snipped]
>
> What is the supported method, then, to loop resume until results.json.bz2
> gets generated? The previous behaviour was that piglit reported there was
> nothing more to do and generated it.

I would use `piglit summary aggregate` to build the results.json.foo file.
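
Presumably the invocation would be along these lines (results directory
name hypothetical):

    piglit summary aggregate test-results-path
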
On 22/06/17 22:12, Dylan Baker wrote:
> [earlier quoting snipped]
>
> I would use `piglit summary aggregate` to build the results.json.foo file.

Hi Dylan,

Petri is on vacation for some time, so let me provide a little more 
background on why this patch is problematic for our use-case.

For automated IGT testing, we have a testlist and reboot the machine as
soon as we hit a certain error condition, or when a test exceeds a
specified timeout (a watchdog reboots the machine immediately).

Once the machine has rebooted, we re-deploy the right kernel and resume 
piglit testing, using the -n option. When the testing is over, piglit 
generates the results.json.foo file and returns 0. At this point, 
ezbench considers the execution done and does its own thing.

However, since this patch was introduced, if the last test got
interrupted, piglit raises the PiglitUserError exception, which is hard
for outside tooling to interpret as "all the tests have been run, so
the report should be generated". It also introduces an ill-tested
codepath, so Petri and I were wondering why not make it less confusing
for the user: just recognize that there are no tests left to run and
generate the report at that point.

Any suggestion as to what we should be doing? Maybe we could raise this
error only when no tests have been run yet and there is nothing left to
run?

Thanks,
Martin
Quoting Martin Peres (2017-06-27 03:38:33)
> [earlier quoting snipped]
>
> Petri is on vacation for some time, so let me provide a little more
> background on why this patch is problematic for our use-case.
>
> [ezbench watchdog-and-resume background snipped]
>
> Any suggestion as to what we should be doing? Maybe we could raise this
> error only when no tests have been run yet and there is nothing left to
> run?
>
> Thanks,
> Martin

Hi Martin,

I really don't think we should remove the exception. The majority of
piglit users are not wrapping piglit in more infrastructure (at this
point I think the Intel kernel and mesa teams are the only ones doing
so), and it does report a real error.

I do think it would work for `piglit resume` to essentially become
`piglit summary aggregate` when there are no tests left to run, since
tests have actually run and the result would be non-empty.

Dylan
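
A hedged sketch of that fallback, with invented helper names and data
model rather than the real Piglit code:

    class PiglitUserError(Exception):
        pass

    def resume(completed, remaining):
        # completed: names with finished results on disk;
        # remaining: names still left to run after filtering.
        if not remaining:
            if completed:
                # Nothing left to schedule but results exist: finish up
                # the way `piglit summary aggregate` would, and succeed.
                return {'status': 'aggregated', 'results': sorted(completed)}
            # Nothing ran and nothing to run: still a genuine user error.
            raise PiglitUserError(
                'no tests to run and no results to aggregate')
        return {'status': 'resuming', 'tests': list(remaining)}

    # Petri's case: the last test was interrupted and -n dropped it, so
    # remaining is empty but results exist; aggregate instead of raising.
    print(resume(completed={'igt/a', 'igt/b'}, remaining=[]))
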
On 27/06/17 19:31, Dylan Baker wrote:
> [earlier quoting snipped]
>
> I really don't think we should remove the exception. The majority of
> piglit users are not wrapping piglit in more infrastructure (at this
> point I think the Intel kernel and mesa teams are the only ones doing
> so), and it does report a real error.

Sure, but let's not make it harder for other companies to use piglit,
especially since piglit is the preferred runner for IGT, which is
becoming less Intel-specific (AMD and vc4).

> 
> I do think it would work for `piglit resume` to essentially become
> `piglit summary aggregate` when there are no tests left to run, since
> tests have actually run and the result would be non-empty.

Yes, that is all we are asking, and it would make wrapping piglit in
more infrastructure easier :)

Feel like cooking the patch or should I do it?

Cheers,
Martin
On 27/06/17 20:44, Martin Peres wrote:
> [earlier thread snipped]
>
>> I do think it would work for `piglit resume` to essentially become
>> `piglit summary aggregate` when there are no tests left to run, since
>> tests have actually run and the result would be non-empty.
>
> Yes, that is all we are asking, and it would make wrapping piglit in
> more infrastructure easier :)
>
> Feel like cooking the patch or should I do it?

Ping, QA hit the same issue...

Should I write the fix or do you want to take care of it?

Thanks,
Martin
Quoting Martin Peres (2017-07-11 05:44:43)
> [earlier thread snipped]
>
> Ping, QA hit the same issue...
>
> Should I write the fix or do you want to take care of it?
>
> Thanks,
> Martin

Doh, I wrote a fix but got bogged down trying to figure out how to test it. I'll
just send it out as is.

Dylan