Failing tests are not run with higher priority
abelb
#1 Posted : Wednesday, November 15, 2017 10:56:56 PM(UTC)
Rank: Advanced Member

Groups: Registered
Joined: 9/12/2014(UTC)
Posts: 133
Location: Netherlands

Thanks: 13 times
Was thanked: 11 time(s) in 11 post(s)
I seem to recall that NCrunch favors failing tests, understandably, because the programmer wants to know whether a fix has had an impact on those failing tests.

However, I notice that this doesn't happen, at least when there are a lot of failing tests. For instance, all successful tests are batched into large jobs (200-600 tests each), and those jobs get a priority of 141. At the same time, all failing tests are put in jobs of 8 tests each, with priority 90.

Result: slow runs because of the many small jobs (tens of minutes vs. 1-2 minutes when everything passes).

It is probably important to note that failing tests report an "actual processing time" (between 200-500 ms) many times higher than when they are run by hand. This may be because NCrunch needs extra time to collect data on a failing test (such as unwinding the stack trace). Even so, this time seems larger than necessary: most tests run in under 5 ms, including the ones that test exception-throwing, so these failing tests are not inherently slow outside NCrunch.

This particular test set has over 21,000 tests, of which only 5,000 are green/successful. Still, I don't see why NCrunch's heuristic ends up this way, as the outcome seems so detrimental (compared to running many more tests at once in parallel).

PS (edit): (not sure this is related, but it caught my attention) I took a look at the I/O delta reads p/sec, which is roughly 10x lower than when I run only the green (successful) tests. So *something* seems to be slowing these down to a crawl.

Remco
#2 Posted : Thursday, November 16, 2017 5:55:57 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 5,133

Thanks: 679 times
Was thanked: 818 time(s) in 778 post(s)
At the moment, NCrunch's prioritisation system works as follows:

High priority (targeted by UI): +10000
Failing test: +100
Impacted test: +300
Test never run before: +200
Test is pinned: +200

These values are then modified as the tests are grouped together (averaged) and sliced into different execution brackets to eventually create tasks. This includes a logarithmic adjustment that accounts for test execution time. The modifications are too complex to summarise easily here, but of interest is that the engine assigns more priority to larger groups of tests within a single task. This means that if you have a large number of fast passing tests all landing in the same task, the engine will put more priority on executing that task, since it probably contains 5000 tests, whereas the other tasks contain only 8.

So probably the most interesting question here is why those failing tests are taking longer to execute.

Which test framework is being used here? Do you see the same sort of results with any non-NCrunch test runner? The 'actual execution time' as captured by NCrunch only includes the time taken to execute the test as reported by the framework, so it won't include any extra processing NCrunch does around exceptions or serialization.
abelb
#3 Posted : Thursday, November 16, 2017 6:17:22 PM(UTC)
Thanks for your response. This is NUnit 3.6. NUnit's own test runner runs all tests synchronously in approx. 3 minutes. Looking at the failing tests, most run in 3-5 ms. Only a handful of tests are genuinely slow (above 10 sec). The average across all tests (passing and failing) is 9 ms per test, which is consistent: 21,000 tests × 9 ms ≈ 3.2 minutes.

Bottom line: there seems to be no significant difference between the failing and successful tests. It would indeed be interesting to find out what happens here. I can send you a compiled executable if that helps, or the sources of the test project (though in a previous thread you told me that such large projects aren't of much use to you for analysis).

Here's a capture of running it in NUnit:

Remco
#4 Posted : Thursday, November 16, 2017 10:32:10 PM(UTC)
Do you have a performance profiler installed? It would be interesting to see if you could capture the source of the lag in the failing tests using a profiler. My assumption at the moment is that it's inside the test framework, so it won't show up in any of NCrunch's own performance tracking.