It still happens that sometimes a root node is marked red, while its children are all green
abelb
#1 Posted : Friday, July 28, 2017 12:04:28 AM(UTC)
Rank: Advanced Member

Groups: Registered
Joined: 9/12/2014(UTC)
Posts: 155
Location: Netherlands

Thanks: 19 times
Was thanked: 11 time(s) in 11 post(s)
In this solution with 2,450 tests, all tests succeed. However, after failing a few of them (to test the previous post about NCrunch using two "current directories"), and more generally while I am working and some tests are failing, the results are sometimes displayed incorrectly when the test window updates.

I have not yet found a way to reproduce it consistently. It definitely happens less often than in the past, when I reported this before (in the 2.x days). Here's an example screenshot:

Remco
#3 Posted : Friday, July 28, 2017 5:06:01 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 6,967

Thanks: 929 times
Was thanked: 1256 time(s) in 1169 post(s)
Hi,

Are you certain that the last run of the fixture wasn't a valid failure? There are some valid scenarios where this can happen. For example, the fixture may have (or have had) a failed test that is not showing in the list of tests, because it has been removed from the test suite or isn't set to show in the tests window. Such a scenario is more likely when the tests are generated dynamically (e.g. with NUnit's TestCaseSource), because there is a higher chance of them being removed or renamed as a result of a code change. The next time any test in the fixture is run, the failure should clear.
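
For example, with a source-driven fixture like this contrived sketch (the names are purely illustrative, not from any real project), editing the case data renames or removes the generated tests:

Code:
// Contrived sketch: fixture and case names are made up.
using NUnit.Framework;

[TestFixture]
public class ParserTests
{
    // Editing this array renames or removes the generated tests. A failure
    // recorded against an old test name no longer appears in the tests
    // window, but it can remain on the fixture until any test in the
    // fixture is run again.
    private static readonly string[] Expressions = { "1+1", "2*3" };

    [TestCaseSource(nameof(Expressions))]
    public void Parses(string expression)
    {
        Assert.That(expression, Is.Not.Empty);
    }
}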
abelb
#4 Posted : Friday, July 28, 2017 11:35:40 AM(UTC)
Rank: Advanced Member

Groups: Registered
Joined: 9/12/2014(UTC)
Posts: 155
Location: Netherlands

Thanks: 19 times
Was thanked: 11 time(s) in 11 post(s)
The set of tests has been stable for a few days.

But what might have happened is that, to create a repro for this report (Current-dir-is-different-during-test-creation-than-during-test-execution.aspx), I temporarily had failing tests on the previous run. In fact, I think I remember that some tests showed this message:

Quote:
"This test was not executed during a planned execution run. Ensure your test project is stable and does not contain issues in initialisation/teardown fixtures."

But the total number of tests was certainly the same (the other bug, in this scenario, was triggered during test buildup, but did not influence the number or names of the tests created for this particular project).

It is true, however, that nearly all my tests are created using a customized, inherited NUnit TestCaseSource attribute. (I've jumped through some hoops to have the test generator create a failing test in case the TestCaseSource attribute fails, because neither NCrunch nor NUnit reports anything in that case; but that's another story, not a bug, and it did not happen in this particular case.)
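
Roughly, that safety net looks like the simplified sketch below. This is not my actual attribute (mine inherits from TestCaseSourceAttribute); this version uses a plain guarded source method instead, and all the names are made up:

Code:
using System;
using System.Collections.Generic;
using NUnit.Framework;

public class WidgetTests
{
    // Guarded source: if building the real cases throws, emit one sentinel
    // case carrying the exception instead of silently producing no tests.
    public static IEnumerable<TestCaseData> Cases()
    {
        var built = new List<TestCaseData>();
        Exception failure = null;
        try
        {
            built.AddRange(BuildRealCases());
        }
        catch (Exception ex)
        {
            failure = ex;
        }

        if (failure != null)
            yield return new TestCaseData(failure).SetName("CaseGenerationFailed");
        else
            foreach (var c in built)
                yield return c;
    }

    private static IEnumerable<TestCaseData> BuildRealCases()
    {
        // Made-up stand-in for the real source (files, reflection, etc.).
        yield return new TestCaseData((object)"input-a");
    }

    [TestCaseSource(nameof(Cases))]
    public void Widget_works(object arg)
    {
        // The sentinel case fails loudly with the original exception.
        if (arg is Exception ex)
            Assert.Fail("TestCaseSource threw: " + ex);

        Assert.That(arg, Is.Not.Null);
    }
}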

In short: the failed tests prior to this happening were either of the two errors mentioned in the aforementioned post, which suggests that the behavior there caused the behavior here.
Remco
#5 Posted : Friday, July 28, 2017 11:52:25 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 6,967

Thanks: 929 times
Was thanked: 1256 time(s) in 1169 post(s)
Under NCrunch, the fixture is considered to be a test result in itself. The failure result of a fixture will always be from the last time the fixture was executed. So you could have one call into the fixture for one of its child tests that passes, then have a later call into the fixture for a child test that fails. The results from the fixture will be from the failed test run and the fixture won't contain any data for the passing test run.
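
To make that concrete, consider a minimal fixture like this purely illustrative sketch:

Code:
using NUnit.Framework;

[TestFixture]
public class ExampleFixture
{
    // If NCrunch's most recent call into this fixture ran only Fails(),
    // the fixture node carries that failed result (plus its coverage and
    // trace output), even while Passes() still shows green from an
    // earlier run of its own.
    [Test]
    public void Passes() => Assert.Pass();

    [Test]
    public void Fails() => Assert.Fail("this result sticks to the fixture");
}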

It's notable that a fixture's result is more than just a pass or fail flag - it also contains code coverage/performance data and trace output. So deriving a fixture's pass/fail result entirely from its child tests wouldn't really be structurally correct. I guess NCrunch could do this in the UI, but it would have the potential to look confusing, as in some cases it could present failed result data as passing (or vice versa).

As an experienced user I realise you probably already understand all of the above. I thought it worth adding to this thread in case others get confused by this situation. The relationship between child tests and their parent fixtures gets very complex in the selective execution scenarios used by NCrunch.