Re-running failed tests via NCrunch grid and TeamCity
Phonesis
#1 Posted : Thursday, April 14, 2016 1:54:59 PM(UTC)
Rank: Advanced Member

Groups: Registered
Joined: 4/14/2016(UTC)
Posts: 32
Location: United Kingdom

Was thanked: 3 time(s) in 3 post(s)
Hi,

I haven't been able to find any documentation on how to do this (if it's even possible).

We have an NCrunch grid set up with 3 grid machines in TeamCity. A 4th machine acts as the controller and is called by TeamCity via a build step that passes in the NCrunch.exe command to run all of the tests. These run on the 3 grid machines.

Generally, most tests pass, but we get a fair number of failures that tend to pass if re-run. We are trying to figure out a way to instruct the grid to re-run these failures as part of a final build step in TeamCity.

Here's what I have tried:

-Created a final step that calls NCrunch.exe and passes in a parameter to run it in a custom engine mode for re-running failed tests (/E ReRunFailures). This mode uses the "Is Failing" condition set in the NCrunch custom engine configuration.

This custom engine mode works for local runs within Visual Studio. However, when called from TeamCity it doesn't seem to register the failures from the previous run. Instead, it seems to just run all of the tests again.

So is there any way to achieve this? Whether via custom engine modes or some other approach?
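
For context, the two build steps look roughly like this (the solution path is illustrative, and I'm assuming the solution file is passed as the first argument to the console tool):

```shell
REM Earlier build step: full test run across the grid
NCrunch.exe "C:\Build\MySolution.sln"

REM Final build step: re-run only the previous failures via the custom engine mode
NCrunch.exe "C:\Build\MySolution.sln" /E ReRunFailures
```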

Many thanks.
Remco
#2 Posted : Thursday, April 14, 2016 11:22:37 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,145

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Hi, thanks for posting.

Most likely this is caused by a lack of cached test results on the TeamCity controller.

When NCrunch shuts down or completes a run of the console tool, it will store all results (including code coverage and timing information) inside a .cache file underneath an _NCrunch_ subdirectory next to the solution file. This cache file is actually very important, as the data collected from tests during their execution is used by the engine to make decisions when it runs them later. The first run through of your tests is always very inefficient, as NCrunch doesn't know how long they take and how to effectively batch them for best performance.

Because the cache file also contains the pass/fail state of the test during its last run, it would be required for your engine mode to work properly. My guess is that because you're running this on a CI server (perhaps with a fresh checkout from VCS), the file doesn't exist where NCrunch expects it, and it just runs all the tests without any state data collected from the previous run.

A new setting was recently introduced that allows you to control where NCrunch stores its cached data for a solution. I'd suggest changing this to an absolute file path that you can be sure always exists on the CI server. In this way, you can be sure that the CI system won't clean up the file, and it will always be there on a fresh checkout. The setting is http://www.ncrunch.net/documentation/reference_solution-configuration_ncrunch-cache-storage-path.
Phonesis
#3 Posted : Friday, April 15, 2016 8:49:23 AM(UTC)
Rank: Advanced Member

Groups: Registered
Joined: 4/14/2016(UTC)
Posts: 32
Location: United Kingdom

Was thanked: 3 time(s) in 3 post(s)
Thanks Remco, much appreciated. Will try this. The documentation in your link mentions that the path must be relative to the solution? Can I do something like:

<NCrunchCacheStoragePath>C:\NCrunchCacheFiles\</NCrunchCacheStoragePath>

Am I right in saying that passing the engine mode to NCrunch.exe via the /E parameter should allow the grid machines to switch modes? The modes themselves are defined in the solution-level NCrunch config file by the looks of it. Do they need to be added to any other config files, such as the global one that sits on the controller machine?
Remco
#4 Posted : Friday, April 15, 2016 11:57:47 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,145

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Phonesis;8606 wrote:


Thanks Remco, much appreciated. Will try this. The documentation in your link mentions that the path must be relative to the solution? Can I do something like:

<NCrunchCacheStoragePath>C:\NCrunchCacheFiles\</NCrunchCacheStoragePath>


Yes! That should work correctly. The configuration setting will accept absolute file paths since v2.20.


Phonesis;8606 wrote:

Am I right in saying that passing in the engine mode via the /E parameter for the NCrunch.exe should allow the grid machines to switch modes? The modes themselves are defined in the solution level ncrunch config file by looks of it. Do they need to be added to any other config files such as the global one that sits on the controller machine?


The modes are defined in the .ncrunchsolution file. They only exist in this file - there's no need to place them in your global config. Normally, the .ncrunchsolution file should be checked into your VCS, so it should be present on the CI server. The grid nodes don't care about engine modes, as the engine modes are a client-side concept that is used in deciding which tests can be added to the processing queue.

The grid nodes themselves are actually quite 'stupid'. They don't have much knowledge of the state of the NCrunch engine. They simply request work from clients that have connected to them, then provide the results back to the client. In this case, the client is the CI server running NCrunch.exe.
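
As an illustration only (the element names below are hypothetical, not the real schema, so the safest approach is to create the mode in Visual Studio and inspect the resulting .ncrunchsolution file), a failing-tests-only engine mode stored in the solution config might look something like:

```xml
<!-- Hypothetical sketch, not the actual schema: create the engine mode in
     Visual Studio and check your own .ncrunchsolution for the real layout. -->
<SolutionConfiguration>
  <CustomEngineModes>
    <EngineMode Name="ReRunFailures">
      <!-- Run only tests whose last recorded state (from the .cache file) is failing -->
      <Condition>IsFailing</Condition>
    </EngineMode>
  </CustomEngineModes>
</SolutionConfiguration>
```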
Phonesis
#5 Posted : Saturday, April 16, 2016 11:29:41 AM(UTC)
Rank: Advanced Member

Groups: Registered
Joined: 4/14/2016(UTC)
Posts: 32
Location: United Kingdom

Was thanked: 3 time(s) in 3 post(s)
Thanks again. I have got the cache relocated ok on the controller machine now and it seems to store it in the desired location without issue.

However, I still can't seem to get the custom engine mode for only running failed tests working. Should this mode only need the "IsFailing" condition from the NCrunch settings page enabled? I have added it and it is checked into the NCrunch solution config file in TeamCity. I run a build step immediately after a full run that calls NCrunch.exe /E "RunFailingTests"

The step seems to just re-run all the tests again rather than the failures from the previous step. Am I doing something wrong?
Remco
#6 Posted : Monday, April 18, 2016 12:26:27 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,145

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Your understanding is correct - the IsFailing flag should be the only one you need.

I've just done some testing on my side with the same scenario, as I personally haven't tested these features used together in quite the same way as you're running them right now. The engine functioned largely as expected (it did only run the failing tests), but I noticed something a bit misleading in the way the test results are reported.

When the console tool creates the HTML reports describing the test run, it populates the tests in these reports using ALL accumulated data in the NCrunch model. This includes both data from the test run AND from the .cache file, so if you're using the HTML report to evaluate which tests were executed in the run, you'll likely see a very misleading result. The best way to check that this is behaving correctly is by examining the actual trace log from the console tool itself.
Phonesis
#7 Posted : Wednesday, April 27, 2016 11:30:26 AM(UTC)
Rank: Advanced Member

Groups: Registered
Joined: 4/14/2016(UTC)
Posts: 32
Location: United Kingdom

Was thanked: 3 time(s) in 3 post(s)
Hi Remco,

I've managed to get this working with a custom engine mode being passed to the console tool in TC as part of a build config. It works most of the time, but it still frequently fails to pick up any failures.

We do an initial run of all our tests and the failures are logged in the XML output. The re-run build is then triggered by TC. We have ensured the cache files stay stored in a folder on the C drive of the controller machine.

It often just seems to fail to pick up the failures and completes without running anything. Any ideas what might cause this?
Remco
#8 Posted : Wednesday, April 27, 2016 12:22:14 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,145

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Phonesis;8681 wrote:

Just seems to often fail to pick up the failures and completes without running anything. Any ideas what might cause this?


The only reason I can think of for this being intermittent would be if NCrunch wasn't able to load the cache file, in which case all tests would be considered in the 'Not Run' state; they wouldn't be failing, so they wouldn't be run by your engine mode.

Do you see any data in the HTML report output from the engine when it doesn't run the tests?
Phonesis
#9 Posted : Thursday, April 28, 2016 9:26:01 AM(UTC)
Rank: Advanced Member

Groups: Registered
Joined: 4/14/2016(UTC)
Posts: 32
Location: United Kingdom

Was thanked: 3 time(s) in 3 post(s)
We do get the HTML report still. It just shows all tests as Not Run; none are flagged as having failed. It usually does work, though. It does sound like it's failing to find the cache file for some reason.
Remco
#10 Posted : Thursday, April 28, 2016 11:16:50 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,145

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Phonesis;8693 wrote:

We do get the HTML report still. It just shows all tests as Not Run. None are flagged as having failed. It usually does work though. Does sound like it's failing to find the cache file for some reason.


My suspicion is that something might be wrong with the cache file, or NCrunch is experiencing an error while trying to load it.

Does your log from the CI run show any exceptions being thrown? If NCrunch dumps the log file, there will usually be an error reported when this happens.
Phonesis
#11 Posted : Thursday, April 28, 2016 2:00:50 PM(UTC)
Rank: Advanced Member

Groups: Registered
Joined: 4/14/2016(UTC)
Posts: 32
Location: United Kingdom

Was thanked: 3 time(s) in 3 post(s)
No issues appear to be flagged in the CI build log. The console tool exits with an OK status. If I discover anything more about when this occurs I'll let you know.
Remco
#12 Posted : Thursday, April 28, 2016 8:00:58 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,145

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Phonesis;8698 wrote:

No issues appear to be flagged in the CI build log. The console tool exits with an OK status. If I discover anything more about when this occurs I'll let you know.


Thanks. If you can find any steps to reproduce this or manage to get an exception somewhere, please let me know.
sheryl
#13 Posted : Friday, September 2, 2016 4:37:42 PM(UTC)
Rank: Member

Groups: Registered
Joined: 5/15/2015(UTC)
Posts: 18
Location: United States of America

Thanks: 12 times
Was thanked: 3 time(s) in 3 post(s)
Hi Remco - if the re-run logic uses the NCrunch cache file, then in our case we have multiple test suites (belonging to the same solution) triggered from the same CI server (Jenkins) at around the same time. In this case, how will it identify the failed test cases that belong to a particular run?

For example, say I have a test suite with 100 tests. From the CI server, we kick off one run in, say, Environment 1 and another in Environment 2.

If the first run has 5 failures and the second run has only 2, how does the cache file get updated (on the same CI server)? In this scenario, will it pull the specific run's failed tests in the re-run logic?

Remco
#14 Posted : Friday, September 2, 2016 10:49:14 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,145

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Hi Sheryl,

If both runs are being executed on the same CI server at the same time, then the contents of the cache file will be decided by whichever run finishes last.

This is because the console tool always reads from the cache when it starts up, then writes to it when it completes the run and shuts down.

The cache file itself isn't used for the console tool reports - these are written out as static files derived from the in-memory contents of the run. The cache file is only used to preserve run state between sessions. So you don't need to be worried about the cache file affecting the actual results of your test runs.

When doing console runs, the only really important information in the cache file is the test timing information which is used by the engine to batch the tests intelligently. For this reason, it's probably safe for the tool to overwrite the cache file from any other recently completed runs, as the new cache data will still contain relevant test timings that actually may be more current/accurate than the ones being overwritten.

Where it's possible to get into trouble here is if you are sharing a cache file between widely diverging branches of your solution. For example, if one of the branches of your solution includes a change made to a namespace of your entire test suite, the cache file then can't be intelligently shared between branches, as all the tests will have different names. If you have very dissimilar branches of your solution being run through the same CI server, I recommend changing the name of the solution (.sln) files to match their branch names so that the cache files will be named separately by NCrunch.
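
A minimal sketch of that per-branch naming idea, assuming the CI job exposes the branch name in a variable (all file names here are hypothetical):

```shell
# Hypothetical names: give each branch its own copy of the solution file so
# NCrunch names that branch's cache file separately.
BRANCH="feature-x"            # normally supplied by the CI server
touch MySolution.sln          # stand-in for the real solution file
cp MySolution.sln "MySolution.$BRANCH.sln"
```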