[vtk-developers] cdash/gerrit emails about failing tests...

Bill Lorensen bill.lorensen at gmail.com
Wed Jan 30 19:23:32 EST 2013


I agree on all counts. We should fix the low-hanging fruit.

Sent from my iPad

On Jan 30, 2013, at 6:43 PM, David Gobbi <david.gobbi at gmail.com> wrote:

> Hi Bill,
> 
> I'm just saying that the situation isn't quite as bad as the dashboard
> makes it look.  And, to be honest, I remember plenty of times in years
> past where the dashboard was much worse for extended periods of time.
> 
> As far as the dashboard is concerned, the number of things that have
> to be fixed is small and quite manageable:
> 
> 1) The coverage machine needs to be more stable; you can't be doing
> coverage on a bleeding-edge system.
> 
> 2) The 25 tests that fail on all machines must be fixed.  This is a pretty
> small number.  Heck, in the past I've fixed that number of failing tests
> by myself in a week during my spare time.  Unfortunately I don't have
> as much spare time as I used to.  But I can take 5 of the 25.
> 
> 3) Valgrind tests.  Most developers ignore this part of the dashboard
> completely.  This is not good.
> 
> There would be a #4, compiler warnings, but the dashboard is
> remarkably clean in this regard, so warnings are a low priority at the
> moment.
> 
> Now the overall issue of developer participation in the code quality
> process... that's a much bigger issue than the dashboard alone.
> Is it a mentorship issue, i.e. are new developers not being taught
> the "ways of the source"?  Are there too many developers, i.e. too
> many cats to herd?  Does gerrit make it too time-consuming to
> submit follow-up fixes when people break the dashboard?  (I myself
> have found that some developers do not respond when I ask for
> a review... and I feel guilty about going to the "reliable" reviewers
> over and over again).
> 
> - David
> 
> 
> 
> 
> 
> On Wed, Jan 30, 2013 at 3:10 PM, Bill Lorensen <bill.lorensen at gmail.com> wrote:
>> David,
>> 
>> In years past, I gave many talks bragging about the high quality of
>> our toolkits. I would often give a live demo and point to the nightly
>> dashboard. We and others used software quality as a selling point of
>> our commitment to open source processes. I know for certain that we
>> won at least two large government grants because of our commitment to
>> quality.
>> 
>> We also gave many GE internal talks touting our process, and I believe
>> we prodded many GE businesses to improve their software processes.
>> 
>> I suspect that you, as our first outside developer, also promoted the
>> quality of VTK.
>> 
>> Bill
>> 
>> On Wed, Jan 30, 2013 at 4:57 PM, Bill Lorensen <bill.lorensen at gmail.com> wrote:
>>> I'm saying that the machine that reports coverage and the machine that
>>> runs valgrind each test less than 1/2 of the code.
>>> 
>>> I agree that there are so many failing tests that we have no idea
>>> about the quality of vtk.
>>> 
>>> In the past, we bragged about our process. We cannot do that anymore.
>>> 
>>> Bill
>>> 
>>> On Wed, Jan 30, 2013 at 4:51 PM, David Gobbi <david.gobbi at gmail.com> wrote:
>>>> On Wed, Jan 30, 2013 at 1:25 PM, Bill Lorensen <bill.lorensen at gmail.com> wrote:
>>>> 
>>>>> Coverage is down to 44%. This means we test less than 1/2 of vtk's code.
>>>>> Why? Because over 900 tests are failing on the coverage machine:
>>>>> http://open.cdash.org/viewTest.php?onlyfailed&buildid=2789553
>>>> 
>>>> Your statement that we test less than 1/2 of the code is false.  There are
>>>> some dashboard machines (e.g. hythloth) that cover much more.  I know that
>>>> I'm being picky with semantics here, but the truth is, we have so many
>>>> failing tests that the dashboard isn't even able to produce accurate code
>>>> quality metrics.
>>>> 
>>>> - David
>>> 
>>> 
>>> 
>>> --
>>> Unpaid intern in BillsBasement at noware dot com
>> 
>> 
>> 
>> --
>> Unpaid intern in BillsBasement at noware dot com


