Some Tests Fail - Possibly those that have a reference to a specific third party DLL
NCrunched
#1 Posted : Friday, August 24, 2012 5:50:38 PM(UTC)
I am unable to run some tests successfully. I see the following error in the Output window:

ERROR (Internal) : System.InvalidOperationException : Process must exit before requested information can be determined.

It seems that only the tests that reference a specific third-party library fail with the above error.
The same tests run successfully under any other test / coverage tool.
Remco
#2 Posted : Friday, August 24, 2012 10:24:23 PM(UTC)
Hi, thanks for posting!

This looks like a runtime issue with the test, likely caused by NCrunch's workspacing. Cross-process tests often require a bit of extra configuration in order to work correctly under NCrunch, as NCrunch takes a very different approach in wiring together test application domains during execution.

I recommend having a look at the test troubleshooting guide. It may give you some more clues about the sorts of things that often cause tests to behave strangely under NCrunch.
NCrunched
#3 Posted : Saturday, August 25, 2012 5:49:34 AM(UTC)
Thanks for your prompt reply.

Based on the link you provided ( https://www.ncrunch.net/...-runtime-related-issues ), the two likely causes in my scenario seemed to be:
Implicit File Dependencies
Assumptions About Referenced Assembly Locations

I have taken the steps described there to address both issues, and that seemed to resolve the error.
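For reference, the kind of change involved looked roughly like this (a minimal sketch; the class, method and file names are just placeholders, not the real ones). The tests now resolve their data files relative to the test assembly itself rather than assuming the original build output folder, so they still find the files inside NCrunch's workspace copy:

using System.IO;
using System.Reflection;

public static class TestPaths
{
    // Resolve a data file relative to wherever the test assembly is
    // actually running from (NCrunch copies it into a workspace).
    public static string Resolve(string relativePath)
    {
        string baseDir = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
        return Path.Combine(baseDir, relativePath);
    }
}

// e.g. var settingsPath = TestPaths.Resolve(@"TestData\settings.xml");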

However, the same test cases (the ones that were always failing and were fixed as described above) now sometimes pass and sometimes fail when I run them repeatedly.
Remco
#4 Posted : Saturday, August 25, 2012 11:07:27 AM(UTC)
Inconsistent pass/fail can be a symptom of the following:
* Race conditions in the test/code (particularly if the code is multi-threaded or cross-process)
* Sequence dependency - for example, some tests may have been written with the accidental assumption of other tests being run before them (or not run before them) inside the test process
* Shared use of resources (for example, if you have two tests making use of the same database and these tests are running in parallel - which can happen if you configure NCrunch to do this)

NCrunch can surface all of the above issues in situations where other test runners would not. The inline code coverage in NCrunch is your best friend when troubleshooting issues such as this, as it can provide you with a clear snapshot of the execution path of a test after it has failed. After you've experienced an intermittent failure of one of your tests, try right-clicking on the test inside the Tests Window and choosing 'Advanced->Show coverage for this test only', then browse through your code to analyse what the test has touched. Hopefully the problem isn't inside a separate process or application domain (NCrunch will only track code coverage inside the test domain that it can control). Otherwise you may be able to fish it out by breaking into the code with a debugger, though this will involve some level of understanding of the pass/fail pattern.
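To make the sequence-dependency point above more concrete, here is a contrived sketch (NUnit-style, not taken from your code) of the kind of accidental coupling I mean - a test that only passes if another test happens to have run first in the same process and initialised some shared static state:

using NUnit.Framework;

public static class Session
{
    // Shared mutable state - the root of the sequence dependency.
    public static string CurrentUser;
}

[TestFixture]
public class LoginTests
{
    [Test]
    public void Login_SetsCurrentUser()
    {
        Session.CurrentUser = "alice";
        Assert.AreEqual("alice", Session.CurrentUser);
    }
}

[TestFixture]
public class ReportTests
{
    [Test]
    public void Report_UsesCurrentUser()
    {
        // Passes only when Login_SetsCurrentUser happened to run earlier in
        // the same test process; under a different execution order it fails.
        Assert.IsNotNull(Session.CurrentUser);
    }
}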
NCrunched
#5 Posted : Saturday, August 25, 2012 12:02:59 PM(UTC)
Remco;2718 wrote:
Inconsistent pass/fail can be a symptom of the following:
* Race conditions in the test/code (particularly if the code is multi-threaded or cross-process)
* Sequence dependency - for example, some tests may have been written with the accidental assumption of other tests being run before them (or not run before them) inside the test process
* Shared use of resources (for example, if you have two tests making use of the same database and these tests are running in parallel - which can happen if you configure NCrunch to do this).


In my case, I was running only a single test case repeatedly, and it passed or failed at random.
Hence, the scenarios mentioned above - race conditions, sequence dependency, shared use of resources - probably do not apply.

Could you please advise?
Remco
#6 Posted : Saturday, August 25, 2012 10:50:33 PM(UTC)
If you're receiving different results from the test simply by running and rerunning the test in isolation, then this is almost certainly caused by a race condition or a state related issue.

Examine the test carefully to see which resources it touches that exist outside the testing process (for example, files on disk, open network sockets, databases, etc). Ensure that these resources are left in a consistent state when the test has finished executing. The code coverage of the test after it has failed could yield important clues around which resources it tried to interact with when it failed. It is good practice to try and engineer tests in such a way that they will always leave external resources in the same state as when the tests began execution. Use database transactions where possible, randomise file names and socket ports, etc. These sorts of issues can sometimes lie dormant when working with other test runners as they often tend to encourage running all the tests in a certain sequence that NCrunch likely won't be following.
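As a contrived illustration of that advice (the framework and names here are placeholders rather than anything from your project), a test that writes to disk can isolate and clean up its own file like this:

using System.IO;
using NUnit.Framework;

[TestFixture]
public class ExportTests
{
    private string _outputFile;

    [SetUp]
    public void SetUp()
    {
        // A unique, per-run file name avoids collisions between repeated
        // or parallel runs of the same test.
        _outputFile = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
    }

    [Test]
    public void Export_WritesReport()
    {
        File.WriteAllText(_outputFile, "report contents");
        Assert.IsTrue(File.Exists(_outputFile));
    }

    [TearDown]
    public void TearDown()
    {
        // Leave the external resource in the same state it started in.
        if (File.Exists(_outputFile))
            File.Delete(_outputFile);
    }
}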

If this is one of your tests that works across multiple processes or application domains, race conditions may inherently exist in the code but may only now be surfacing because of the weight of NCrunch's instrumentation or the manner in which it works with the test framework. A good way to try to analyse race conditions in large complex tests is to introduce trace output for areas of the code where contention may be a problem. You can then examine the sequence of the trace messages when the test fails to try and find races.
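Purely as an illustration, something as simple as this in the contended areas can make the failing interleaving visible when you compare the trace from a passing run against a failing one:

using System.Diagnostics;
using System.Threading;

public class Worker
{
    private readonly object _gate = new object();

    public void Process(string item)
    {
        Trace.WriteLine(string.Format("[{0}] waiting for gate ({1})", Thread.CurrentThread.ManagedThreadId, item));
        lock (_gate)
        {
            Trace.WriteLine(string.Format("[{0}] inside gate ({1})", Thread.CurrentThread.ManagedThreadId, item));
            // ... contended work ...
        }
        Trace.WriteLine(string.Format("[{0}] released gate ({1})", Thread.CurrentThread.ManagedThreadId, item));
    }
}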