Re-run specific tests
samholder
#1 Posted : Friday, November 23, 2018 12:36:24 PM(UTC)
Hi.

We have some UI tests which we run on grid nodes. The build runs a particular engine mode which runs the tests in a specific category. This is working OK, but sometimes the tests fail for random reasons, mainly intermittent Selenium/UI testing infrastructure issues.

I'd like to be able to rerun the build but specify a list of tests that should be run, rather than running everything in the category. What is the best way of handling this? Am I going to need a PowerShell script which modifies the definition of the engine mode dynamically before the tests are run, or does NCrunch support a way to do this out of the box?

Cheers,

Sam
Remco
#2 Posted : Friday, November 23, 2018 11:03:20 PM(UTC)
Hi Sam,

Thanks for posting. The best way I can think of to handle this would be to have a second engine mode that only runs tests that have been marked as failed.

Once the first run completes and the intermittent tests fail, they'll be stored in the NCrunch .cache file with a failed status.

The second build run then starts up targeting only the failed tests and runs just those again.

The test results from the second build will be the results you're after. If a test fails in both builds, then it's much less likely to be an intermittent failure.
samholder
#3 Posted : Friday, November 23, 2018 11:18:19 PM(UTC)
Hey Remco, thanks for the quick reply as ever. I had considered this after playing with the engine mode options a bit, but it's not 100% clear to me which cache file will be in play here. The NCrunch console tool gets invoked on the TC build agent. This is done with 0 threads, so the tests can't be run on that machine itself; they are farmed out to the grid nodes to be run. When we run the test build again it may not happen on the same TC agent, but it will be farmed out to the same collection of grid nodes. Obviously the individual tests may end up on different grid nodes.

So which cache file will be used to determine whether the tests failed or not? It seems to me like there could be problems either way: if it's the one on the agent, then a different agent means a different cache file, and so the wrong tests might be run. If it's on the grid nodes themselves, then what happens if tests end up on different nodes?

Or am I overthinking this?
Remco
#4 Posted : Saturday, November 24, 2018 11:37:37 PM(UTC)
The cache file is usually stored on the machine running NCrunch.exe. This file won't be stored on the grid nodes, although they do have cached data of a different kind (not relevant in this scenario).

The contents of the cache file can be quite important when using the NCrunch console tool in the build system, because NCrunch will use the data here to form a picture of what has changed in the solution since the last time the tests were run. This means that if you're running NCrunch.exe on different TC build agents, the cache file won't be consistent between runs and your build will function less efficiently. It does also mean that my suggestion wouldn't work, because if the runs were being handled by different build agents then the list of failed tests might not be available on the second run.

So you probably have a couple of options here. Either you can restrict the running of NCrunch.exe so it only happens on one of your build agents, or you can modify the storage directory of the NCrunch .cache file so that it lands on a shared network drive that is used by all the build agents. Note that sharing the cache file between build agents does give a slight chance of concurrency issues.
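
As a rough illustration only (this is not the built-in setting, just an approximation of the shared-drive idea), the same effect can be had by copying the .cache file to and from a share around the NCrunch run. All paths and file names below are hypothetical; check where NCrunch.exe actually writes its .cache file in your environment:

Quote:
# Sketch: share the NCrunch .cache file between TC build agents via a network drive.
# The file name and both locations are assumptions.
$sharedCache = '\\buildshare\ncrunch\MySolution.cache'                   # hypothetical
$localCache  = 'C:\BuildAgent\work\MySolution\MySolution.cache'          # hypothetical

# Pull the shared copy down before invoking NCrunch.exe, if one exists yet
if (Test-Path $sharedCache) {
    Copy-Item $sharedCache $localCache -Force
}

# ... run the NCrunch console tool here ...

# Push the updated cache back so the next agent (and the rerun build) can see it
Copy-Item $localCache $sharedCache -Force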
samholder
#5 Posted : Tuesday, December 4, 2018 2:30:23 PM(UTC)
I solved this problem by collecting the failed tests via the TC HTTP API in a PowerShell script, parsing out the names, and creating a piece of text which matches NCrunch's syntax for running specific tests, like so:

(FullNameMatchesRegex 'failed test 1' OR FullNameMatchesRegex 'failed test 2' OR FullNameMatchesRegex 'failed test 3')

Then I trigger the failed build again, passing this string in via the HTTP API as a build parameter. In the build I check whether this parameter has a value, and if it does I edit the ncrunch.v3.solutionconfig file to set the <TestsToExecuteAutomatically> element inside <Settings> to this text, and then NCrunch only executes those tests.

A bit hacky, but it works. Or seems to, superficially :)
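
For anyone wanting to copy this, here is a rough PowerShell sketch of the above. The TC server URL, build id, token, and the exact XML shape of the solutionconfig file are all assumptions, so treat it as a starting point rather than a drop-in script:

Quote:
# Sketch only: collect failed tests from TeamCity and point NCrunch at them.
$tcServer = 'https://teamcity.example.com'                  # hypothetical server
$buildId  = $env:FAILED_BUILD_ID                            # hypothetical build parameter
$headers  = @{ Authorization = "Bearer $env:TC_TOKEN"; Accept = 'application/json' }

# Ask TeamCity for the failed test occurrences of the previous build
$uri    = "$tcServer/app/rest/testOccurrences?locator=build:(id:$buildId),status:FAILURE&fields=testOccurrence(name)"
$failed = Invoke-RestMethod -Headers $headers -Uri $uri

# Build the NCrunch filter text; anchoring each name with '$' avoids the
# partial-match issue discussed further down this thread
$clauses = $failed.testOccurrence | ForEach-Object {
    'FullNameMatchesRegex ''{0}$''' -f [regex]::Escape($_.name)
}
$filter = '(' + ($clauses -join ' OR ') + ')'

# Patch the solutionconfig so only the failed tests are executed automatically.
# The file name and element layout are assumed from the description above.
$configPath  = 'ncrunch.v3.solutionconfig'
[xml]$config = Get-Content $configPath
$config.SolutionConfiguration.Settings.TestsToExecuteAutomatically = $filter
$config.Save((Resolve-Path $configPath).Path)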
Remco
#6 Posted : Tuesday, December 4, 2018 11:11:01 PM(UTC)
samholder;12879 wrote:
I solved this problem by collecting the failed tests via the TC HTTP API in a PowerShell script, parsing out the names, and creating a piece of text which matches NCrunch's syntax for running specific tests, like so:

(FullNameMatchesRegex 'failed test 1' OR FullNameMatchesRegex 'failed test 2' OR FullNameMatchesRegex 'failed test 3')

Then I trigger the failed build again, passing this string in via the HTTP API as a build parameter. In the build I check whether this parameter has a value, and if it does I edit the ncrunch.v3.solutionconfig file to set the <TestsToExecuteAutomatically> element inside <Settings> to this text, and then NCrunch only executes those tests.

A bit hacky, but it works. Or seems to, superficially :)


Brilliant! If I were to give an award for the most creative solution to an NCrunch-related problem, you've definitely taken it for this year.
GreenMoose
#7 Posted : Wednesday, December 5, 2018 8:57:19 AM(UTC)
FWIW, since you use TeamCity: I store the NCrunch cache directory as an artifact of the build. In the following build, the first step downloads that artifact (for the same branch, if available) via the TC API so that the 'impacted or failed tests only' engine mode can be used for that run.
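
A sketch of what that first step might look like in PowerShell; the build type id, branch variable, artifact name and cache directory are all placeholders, and the REST endpoints should be checked against your TeamCity version:

Quote:
# Sketch: pull the NCrunch cache artifact published by the last successful build
# of the same branch, so this run can use the 'impacted or failed tests only' mode.
$tcServer = 'https://teamcity.example.com'                   # hypothetical
$headers  = @{ Authorization = "Bearer $env:TC_TOKEN"; Accept = 'application/json' }
$locator  = "buildType:(id:MyProject_UiTests),branch:(name:$env:BRANCH),status:SUCCESS,count:1"
$artifact = 'ncrunch-cache.zip'                              # hypothetical artifact name
$cacheDir = 'C:\BuildAgent\work\_ncrunchcache'               # hypothetical cache directory

# Find the most recent matching build, if there is one
$latest = (Invoke-RestMethod -Headers $headers -Uri "$tcServer/app/rest/builds?locator=$locator").build |
    Select-Object -First 1

if ($latest) {
    # Download and unpack its cache artifact to where NCrunch expects it
    Invoke-WebRequest -Headers $headers -OutFile "$env:TEMP\$artifact" `
        -Uri "$tcServer/app/rest/builds/id:$($latest.id)/artifacts/content/$artifact"
    Expand-Archive -Path "$env:TEMP\$artifact" -DestinationPath $cacheDir -Force
}

# ... run NCrunch.exe, then publish $cacheDir as this build's artifact ...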
applieddev
#8 Posted : Monday, October 18, 2021 10:49:54 AM(UTC)
We came across an issue with partial regex matches using FullNameMatchesRegex.
Example test names:
  • Import Test
  • Import Test to backup
  • AutoImport Test

If we use:

Quote:
(FullNameMatchesRegex 'Import Test')

then it runs all 3 example tests (they all match that partial regex).

Quote:
(FullNameMatchesRegex 'Import Test$')

then it runs 2 example tests (Import Test and AutoImport Test).

Quote:
(FullNameMatchesRegex '^Import Test$')

then it runs 0 tests.

What are we missing to do an exact-name regex match?
Remco
#9 Posted : Tuesday, October 19, 2021 1:18:02 AM(UTC)
Internally, NCrunch uses System.Text.RegularExpressions.Regex.IsMatch to match the regex against the physical name of the test.

There is a tricky catch here. The name used for the regex match isn't the test's 'Display Name' shown in the Tests Window. Instead, it's the name used to identify the test internally. This is a bit of a legacy thing: in older versions of NCrunch these names used to be the same, and in most cases they still are ... however, I don't think it's actually possible to create a physical test name that contains a space. Is it possible the name of your test should actually be something like 'MyProject.MyNamespace.ImportFixture.ImportTest'? You should be able to find the physical name of the test by turning on the 'Full Test Name' column in the Tests Window.
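
To make that concrete, here is a tiny PowerShell check of the three patterns from the previous post against an assumed physical name (the namespace and fixture names below are made up):

Quote:
# Assumed physical (full) test name - the prefix is invented for illustration
$fullName = 'MyProject.MyNamespace.ImportFixture.Import Test'

[regex]::IsMatch($fullName, 'Import Test')                    # True  - matches anywhere in the name
[regex]::IsMatch($fullName, 'Import Test$')                   # True  - the name ends with 'Import Test'
[regex]::IsMatch($fullName, '^Import Test$')                  # False - the full name starts with the namespace, not 'Import'
[regex]::IsMatch($fullName, '\.ImportFixture\.Import Test$')  # True  - anchored on fixture + test name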
applieddev
#10 Posted : Tuesday, October 19, 2021 9:02:43 AM(UTC)
Thanks for the explanation, Remco.
We use FullNameMatchesRegex with spaces and that works.

I've updated our regex match to include the fixture name, and that works better:
Quote:
(FullNameMatchesRegex '\.ImportFixture\.Import Test$')


Also, extending samholder's solution to reduce the total character length when there are hundreds of tests to rerun:
Quote:
(FullNameMatchesRegex '(\.ImportFixture\.Import Test$)|(\.ImportFixture\.AutoImport Test$)|(\.ImportFixture\.Import Test to backup$)')
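
A small PowerShell sketch for generating that combined form automatically from a list of failed names (the names below are placeholders). Note that [regex]::Escape also escapes the spaces, which is harmless:

Quote:
# Placeholder list of failed test names (suffixes of the physical full names)
$failedTests = @(
    '.ImportFixture.Import Test',
    '.ImportFixture.AutoImport Test',
    '.ImportFixture.Import Test to backup'
)

# Escape each name, anchor it at the end, and join the alternatives with '|'
$pattern = ($failedTests | ForEach-Object { '(' + [regex]::Escape($_) + '$)' }) -join '|'
$filter  = "(FullNameMatchesRegex '$pattern')"

$filter
# (FullNameMatchesRegex '(\.ImportFixture\.Import\ Test$)|(\.ImportFixture\.AutoImport\ Test$)|(\.ImportFixture\.Import\ Test\ to\ backup$)')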