"This test was not executed during a planned execution run" error
bartj
#1 Posted : Monday, August 27, 2018 5:57:38 PM(UTC)
Rank: Member

Groups: Registered
Joined: 12/4/2013(UTC)
Posts: 26
Location: New Zealand

Thanks: 2 times
Was thanked: 3 time(s) in 3 post(s)
Hi,

When I start a full run of our test suite using NCrunch in Visual Studio, I end up with somewhere between 5 and 50 tests failing with the following error:

This test was not executed during a planned execution run. Ensure your test project is stable and does not contain issues in initialisation/teardown fixtures.

The error will usually go away when I manually rerun the tests.

Note that most of the tests do not use any kind of test case source or setup/teardown code, and that I didn’t see any of these errors when running the entire suite via NCrunch Console.

I have submitted a bug report via the Visual Studio menu, and sent detailed logs via this website. Apart from the logs, which didn't reveal any obvious problems to my untrained eye, is there any way of diagnosing this generic error?

Thanks,
Bart
Remco
#2 Posted : Tuesday, August 28, 2018 12:29:13 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 6,970

Thanks: 929 times
Was thanked: 1256 time(s) in 1169 post(s)
Hi Bart,

Thanks for sharing this problem. I tried to communicate the details of this via email last week, but I'm not sure if you received it. I'll copy my analysis of the issue here:

The problem is an NUnit ID desynchronization between the test discovery step and the execution step. When NUnit discovers your tests, it gives each test an ID, generated sequentially in the order in which the tests appear during discovery.

These IDs are the only way we can safely identify tests to NUnit, due to limitations in the way the framework is designed. Therefore it's fundamentally important that the IDs are the same between the discovery step and the execution step, even though these steps may run in different processes or even on different machines (i.e. grid nodes).

For some reason, your test suite is generating a variable number of tests between the discovery step and the execution step. NCrunch asks your test assembly for a list of the tests it contains and receives one count. When it then launches a process to run those same tests using the exact same version of your source code, it asks again and receives a different count, so the IDs no longer match. This causes identification problems when results are assigned, and the resulting instability can corrupt test results throughout the whole test project.
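
To make the failure mode concrete (the IDs here are simplified for illustration; the real values are NUnit's internal identifiers):

Discovery run:  TestA -> ID 1, TestB -> ID 2, TestC -> ID 3
Execution run:  TestA -> ID 1, TestC -> ID 2   (TestB didn't appear, so TestC's ID shifted)

Any result received for ID 2 now attaches to the wrong test, and the test planned as ID 3 never reports at all.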

This problem almost certainly arises from the use of the TestCaseSource attribute. This attribute lets you define your own tests by returning them from a source member (for example, as an array or IEnumerable). So if one of these pulls test rows out of a database, or from anywhere else that might hold variable data, you'll get this problem. I recommend checking every test in your system tests assembly that makes use of TestCaseSource and ensuring it returns consistent data between discovery runs, as sketched below.
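
As a rough sketch (the names here are invented for illustration, not taken from your code), the difference between an unstable source and a stable one looks like this:

    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    public class CustomerTests
    {
        // UNSTABLE: the case count and order depend on external state, so two
        // discovery runs can yield different ID sequences.
        public static IEnumerable<TestCaseData> UnstableCases() =>
            FakeDb.Query("SELECT Id FROM Customers")           // no ORDER BY
                  .Select(id => new TestCaseData(id));

        // STABLE: the same rows, deterministically ordered.
        public static IEnumerable<TestCaseData> StableCases() =>
            FakeDb.Query("SELECT Id FROM Customers")
                  .OrderBy(id => id)                           // fixed order
                  .Select(id => new TestCaseData(id));

        [TestCaseSource(nameof(StableCases))]
        public void Customer_id_is_positive(int customerId) =>
            Assert.That(customerId, Is.GreaterThan(0));
    }

    // Stand-in for whatever external data source is in play.
    static class FakeDb
    {
        public static IEnumerable<int> Query(string sql) => new[] { 3, 1, 2 };
    }

Note that sorting only fixes the ordering; for the IDs to line up, the underlying data must also be identical between the discovery and execution runs.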

I've made a note to add a catch to the engine to report this situation more sensibly. This was fiendishly difficult to analyse.
bartj
#3 Posted : Tuesday, August 28, 2018 5:27:13 PM(UTC)
Rank: Member

Groups: Registered
Joined: 12/4/2013(UTC)
Posts: 26
Location: New Zealand

Thanks: 2 times
Was thanked: 3 time(s) in 3 post(s)
Hi Remco,

Unfortunately, I can't find that email anywhere, but thanks for finding the cause of the issue!

Is it only important that the test case sources return the same number of tests between executions, or do the test names also need to be identical? How about the ordering of the results from the test case source?

I can't think of any test case sources in our code that would return a different number of cases between runs (although we have almost 9000 tests, so it's difficult to keep track of them all), but there may be subtler changes: the values of individual test cases might differ between runs (e.g. a few test cases are time-based), or the order of the cases may be undefined.
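
For example, I'm imagining something like this (a simplified, hypothetical case, not our actual code), where the test name NUnit generates embeds a value that changes on every run:

    using System;
    using System.Collections.Generic;
    using NUnit.Framework;

    public class ExpiryTests
    {
        public static IEnumerable<TestCaseData> TimeBasedCases()
        {
            // The case value (and the test name generated from it)
            // is different each time the source is enumerated.
            yield return new TestCaseData(DateTime.Now.AddDays(-30));
        }

        [TestCaseSource(nameof(TimeBasedCases))]
        public void Record_is_expired(DateTime cutoff) =>
            Assert.That(cutoff, Is.LessThan(DateTime.Now));
    }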

We'll have a look through the code and the NCrunch logs to see if we can hunt down the source of the problem. As you say, a clearer diagnostic would be helpful as well.

Again, thanks for looking into this!
Bart
Remco
#4 Posted : Tuesday, August 28, 2018 10:25:08 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 6,970

Thanks: 929 times
Was thanked: 1256 time(s) in 1169 post(s)
bartj;12586 wrote:

Is it only important that the test case sources return the same number of tests between executions, or do the test names also need to be identical? How about the ordering of the results from the test case source?


The ordering of tests is the critical point here. Because the IDs are assigned sequentially in discovery order, an unstable test case shifts the ID of every test that appears after it. If a test case showed up only intermittently but was the last case in the suite, the scope of the problem would be greatly reduced, since no other test's ID would change.
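
If a source enumerates something with no guaranteed order (a database query without an ORDER BY, Directory.GetFiles, a HashSet), a sketch of the fix, using hypothetical names, is simply to sort it explicitly:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using NUnit.Framework;

    public class DataFileTests
    {
        // Directory.GetFiles makes no ordering guarantee, so sort explicitly
        // to keep discovery order (and therefore the NUnit IDs) stable.
        public static IEnumerable<TestCaseData> OrderedCases() =>
            Directory.GetFiles("TestData", "*.json")
                     .OrderBy(path => path, StringComparer.Ordinal)
                     .Select(path => new TestCaseData(path));

        [TestCaseSource(nameof(OrderedCases))]
        public void Data_file_parses(string path) =>
            Assert.That(File.Exists(path), Is.True);
    }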

When NCrunch invokes NUnit to discover tests and your log verbosity is set to Detailed, it dumps a large block of XML into the log detailing the full output of NUnit's discovery step.

If you search through several instances of this XML, you'll likely find two dumps with different data. You can then put these through a comparison tool (e.g. KDiff3) to see how they've changed. This should highlight the unstable test cases and make the problem much easier to narrow down. The key symptom is that the tests' NUnit IDs differ between discovery runs.
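
The discovery XML contains one element per test case, along the lines of the following (attributes trimmed for brevity; the IDs shown are illustrative):

    First dump:
      <test-case id="0-1001" name="Parse(1)" />
      <test-case id="0-1002" name="Parse(2)" />
      <test-case id="0-1003" name="Validate" />

    Second dump (one case has vanished, shifting every ID after it):
      <test-case id="0-1001" name="Parse(1)" />
      <test-case id="0-1002" name="Validate" />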

We're planning to implement something to detect and report this problem. Unfortunately, it'll be a fairly simple catch that can only tell you the problem exists without telling you which tests are causing it. Examining the discovery XML is definitely the way to go.