[HACK/RFC] make piglit-summary.py print things out in a more useful way

Submitted by Rob Clark on Nov. 25, 2015, 4:32 p.m.

Details

Message ID 1448469168-22702-1-git-send-email-robdclark@gmail.com
State New
Series "make piglit-summary.py print things out in a more useful way" ( rev: 1 ) in Piglit


Commit Message

Rob Clark Nov. 25, 2015, 4:32 p.m.
Complete hack, but maybe we want to make something like this an optional
way that piglit-summary dumps out results.

Basically it groups all the results according to transition (i.e. 'pass
-> fail' or 'fail -> fail -> pass', etc.), and then for each group dumps
out the test environment and cmdline.

This gives me something I can easily cut/paste to rerun.  For example,
a common use-case for me while debugging some fix/feature/etc on the
driver side is to diff results between a baseline run and the most
recent run.  And as I debug/fix regressions on the driver side, I tend
to first want to re-run the set of tests that had gone pass->fail.  The
old way involved './piglit-summary.py -d baseline latest | grep "pass fail"',
then finding the results.json (which is slightly more annoying because
of the whole s/@/\// thing) and cut/pasting the cmdline.  A somewhat
time-consuming and annoying way to do things.

There is still the slight problem of how to escape special chars in
the piglit cmdline.  Seriously, cmdline args like "*Lod" are a horrible
idea.
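To illustrate the idea outside the piglit framework, here is a minimal
standalone sketch of the same transition grouping (the dict-of-runs input
and the helper name are hypothetical; the actual patch walks
results.results and TestrunResult objects instead):

```python
# Minimal standalone sketch of the grouping idea (not the actual piglit
# code).  Each "run" here is a hypothetical dict mapping test name to
# status string.
def group_by_transition(runs):
    """Group test names by their status transition across runs."""
    groups = {}
    # Assumes every run reports the same set of tests.
    for test in sorted(runs[0]):
        transition = ' -> '.join(run[test] for run in runs)
        groups.setdefault(transition, []).append(test)
    return groups

baseline = {'a': 'pass', 'b': 'pass', 'c': 'fail'}
latest = {'a': 'pass', 'b': 'fail', 'c': 'fail'}
for transition, tests in group_by_transition([baseline, latest]).items():
    print(transition + ': ' + ' '.join(tests))
```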
---
fwiw, example output: http://hastebin.com/raw/pezotoyoje

 framework/results.py          | 11 +++++++++++
 framework/summary/console_.py | 33 ++++++++++++++++++++++++++++++---
 2 files changed, 41 insertions(+), 3 deletions(-)

Patch hide | download patch | download mbox

diff --git a/framework/results.py b/framework/results.py
index eeffcb7..fa43cf6 100644
--- a/framework/results.py
+++ b/framework/results.py
@@ -308,6 +308,17 @@  class TestrunResult(object):
             except KeyError:
                 raise e
 
+    def get_result_object(self, key):
+        """Similar to get_result(), but returns the result object."""
+        try:
+            return self.tests[key]
+        except KeyError as e:
+            name, test = grouptools.splitname(key)
+            try:
+                return self.tests[name]
+            except KeyError:
+                raise e
+
     def calculate_group_totals(self):
         """Calculate the number of pases, fails, etc at each level."""
         for name, result in self.tests.iteritems():
diff --git a/framework/summary/console_.py b/framework/summary/console_.py
index d219498..ebc7adb 100644
--- a/framework/summary/console_.py
+++ b/framework/summary/console_.py
@@ -89,10 +89,37 @@  def _print_summary(results):
 
 def _print_result(results, list_):
     """Takes a list of test names to print and prints the name and result."""
+    # Set up a hashtable mapping transition (i.e. 'pass -> fail') to a
+    # list of result objects for tests that followed that transition
+    # (i.e. first result is 'pass' and second is 'fail').  This could
+    # mean transition strings like 'pass -> fail -> pass' if there were
+    # three sets of results.  I guess the normal use case would be to
+    # compare two sets of results.
+    #
+    # Note that we just keep the last result object, but it is expected
+    # that command/environment are the same across piglit runs.
+    groups = {}
     for test in sorted(list_):
-        print("{test}: {statuses}".format(
-            test=grouptools.format(test),
-            statuses=' '.join(str(r) for r in results.get_result(test))))
+        transition = None
+        last = None
+        for each in results.results:
+            status = str(each.get_result(test))
+            if transition is None:
+                transition = status
+            else:
+                transition = ' -> '.join([transition, status])
+            last = each
+        result = last.get_result_object(test)
+        if transition not in groups:
+            groups[transition] = []
+        groups[transition].append(result)
+
+    # And now print out results grouped by transition.
+    for transition, resultlist in groups.iteritems():
+        print(transition + ':')
+        for result in resultlist:
+            print(result.environment + ' ' + result.command)
+        print('')
 
 
 def console(results, mode):

Comments

On Wed, Nov 25, 2015 at 11:32:48AM -0500, Rob Clark wrote:
> Complete hack, but maybe we want to make something like this an optional
> way that piglit-summary dumps out results.
> 
> Basically it groups all the results according to transition (ie. 'pass
> -> fail' or 'fail -> fail -> pass', etc., and then for each group dumps
> out the test environment and cmdline.
> 
> This gives me something I can easily cut/paste to rerun.  For example,
> a common use-case for me while debugging some fix/feature/etc on the
> driver side, is to diff results between a baseline run and most recent
> run.  And as I debug/fix regressions on driver side, I tend to first
> want to re-run the set of tests that had gone pass->fail.  The old way
> involved './piglit-summary.py -d baseline latest | grep "pass fail"'
> then finding the results.json (which is slightly more annoying because
> of the whole s/@/\// thing) and cut/paste the cmdline.  A somewhat time
> consuming and annoying way to do things.

Just FYI, you don't need to worry about the 's!/!@!g' thing, -t and -x
handle that automagically ;)

> [snip]

Hi Rob,

Somewhere I have a patch that automates this, you point it at two
results and then give it a set of statuses and it reruns the tests that
match. Something like:

piglit rerun res1 res2 new pass fail,warn,crash

And it gives you a new result with just those changes. Is that what
you're looking for?

Dylan
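[Editorial sketch: if the example is read as "status set per result set",
the selection rule behind the proposed command might look like the
following. This is entirely hypothetical — the rerun patch itself is not
posted here, and `should_rerun` is an invented name.]

```python
# Hypothetical sketch of the selection rule behind the proposed
# 'piglit rerun res1 res2 new pass fail,warn,crash' example: a test is
# rerun when its status in each prior result set falls within the
# corresponding allowed-status set.
def should_rerun(statuses, allowed_sets):
    """statuses: one status string per result set; allowed_sets: one
    collection of acceptable statuses per result set."""
    return len(statuses) == len(allowed_sets) and all(
        status in allowed
        for status, allowed in zip(statuses, allowed_sets))

# 'pass' in res1, then any of fail/warn/crash in res2 -> selected.
print(should_rerun(['pass', 'crash'], [{'pass'}, {'fail', 'warn', 'crash'}]))
```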
On Wed, Nov 25, 2015 at 1:58 PM, Dylan Baker <baker.dylan.c@gmail.com> wrote:
> On Wed, Nov 25, 2015 at 11:32:48AM -0500, Rob Clark wrote:
>> [snip]
>
> Just FYI, you don't need to worry about the 's!/!@!g' thing, -t and -x
> handle that automagically ;)

hmm, well usually for the 'rerun things that regressed to see if I
fixed it' pass I'm not using full piglit run w/ -t/-x but just fishing
out the cmdlines to run them one by one..

for example, frequently I re-run them one by one w/ before/after mesa
+ cmdstream logging and diff the cmdstream..

>> [snip]
>
> Hi Rob,
>
> Somewhere I have a patch that automates this, you point it at two
> results and then give it a set of statuses and it reruns the tests that
> match. Something like:
>
> piglit rerun res1 res2 new pass fail,warn,crash
>
> And it gives you a new result with just those changes. Is that what
> you're looking for?

Interesting..

The big problem is that the time it takes to slurp in previous run
results is non-trivial.. although I did have this dream-idea of a sort
of curses-based results browser where I could filter on (for example)
pass->fail or pass->*, and then re-run the selected set (and iterate
on that as I debug/fix things on the driver side without exiting the
results browser, so without having to re-load results at each step)

That said, a small modification of what you have to dump out a list of
cmdlines+env would accomplish more or less the same thing in a less
flashy way ;-)

BR,
-R
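[Editorial sketch: the filter-on-transition step described above could
look like the following. The helper is hypothetical, not part of the
posted patch; statuses are plain strings, and fnmatch globs stand in for
whatever pattern syntax a real tool would use.]

```python
# Hypothetical sketch of filtering tests by a transition glob such as
# 'pass -> *' or 'pass -> fail'.  Each run is a dict of test -> status.
import fnmatch

def filter_by_transition(runs, pattern):
    """Return test names whose joined status transition matches pattern."""
    selected = []
    for test in sorted(runs[0]):
        transition = ' -> '.join(run[test] for run in runs)
        if fnmatch.fnmatch(transition, pattern):
            selected.append(test)
    return selected

baseline = {'a': 'pass', 'b': 'pass', 'c': 'fail'}
latest = {'a': 'pass', 'b': 'fail', 'c': 'fail'}
print(filter_by_transition([baseline, latest], 'pass -> fail'))  # ['b']
```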
On Wed, Nov 25, 2015 at 02:21:18PM -0500, Rob Clark wrote:
> On Wed, Nov 25, 2015 at 1:58 PM, Dylan Baker <baker.dylan.c@gmail.com> wrote:
> > On Wed, Nov 25, 2015 at 11:32:48AM -0500, Rob Clark wrote:
> >> [snip]
> >
> > Just FYI, you don't need to worry about the 's!/!@!g' thing, -t and -x
> > handle that automagically ;)
> 
> hmm, well usually for the 'rerun things that regressed to see if I
> fixed it' pass I'm not using full piglit run w/ -t/-x but just fishing
> out the cmdlines to run them one by one..
> 
> for example, frequently I re-run them one by one w/ before/after mesa
> + cmdstream logging and diff the cmdstream..
> 
> >> [snip]
> >
> > Hi Rob,
> >
> > Somewhere I have a patch that automates this, you point it at two
> > results and then give it a set of statuses and it reruns the tests that
> > match. Something like:
> >
> > piglit rerun res1 res2 new pass fail,warn,crash
> >
> > And it gives you a new result with just those changes. Is that what
> > you're looking for?
> 
> Interesting..
> 
> The big problem is that the time it takes to slurp in previous run
> results is non-trivial.. although I did have this dream-idea of a sort
> of curses-based results browser where I could filter on (for example)
> pass->fail or pass->*, and then re-run the selected set (and iterate
> on that as I debug/fix things on the driver side without exiting the
> results browser, so without having to re-load results at each step)
> 
> That said, a small modification of what you have to dump out a list of
> cmdlines+env would accomplish more or less the same thing in a less
> flashy way ;-)
> 
> BR,
> -R

That sounds more like you want an extension to the
piglit-print-commands.py script, which takes a profile and prints all of
the command line arguments for each test in that profile.

Dylan