Integration tests, creating databases with SetUpFixture
nrjohnstone
#1 Posted : Sunday, November 8, 2015 7:29:00 PM(UTC)
Rank: Member

Groups: Registered
Joined: 7/1/2015(UTC)
Posts: 12
Location: New Zealand

Thanks: 1 times
Was thanked: 2 time(s) in 2 post(s)
We are running our integration tests using NCrunch, and have a SetUpFixture at the namespace to create the databases before the first test runs, with a GUID in the name so each NCrunch process gets a unique database.
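A sketch of the arrangement described above, assuming NUnit 3 attribute names ([SetUpFixture] with [OneTimeSetUp]/[OneTimeTearDown]); the class and helper names are illustrative placeholders, not the actual project's code:

```csharp
using System;
using NUnit.Framework;

// Placed at the namespace level so NUnit runs it once around the
// integration tests in this namespace.
[SetUpFixture]
public class IntegrationDatabaseSetup
{
    // Exposed so tests can connect to the per-process database.
    public static string DatabaseName;

    [OneTimeSetUp]
    public void CreateDatabase()
    {
        // A GUID suffix gives each NCrunch test process its own database.
        DatabaseName = "IntegrationTests_" + Guid.NewGuid().ToString("N");
        // CreateDatabaseOnServer(DatabaseName);  // project-specific helper
    }

    [OneTimeTearDown]
    public void DropDatabase()
    {
        // DropDatabaseOnServer(DatabaseName);    // project-specific helper
    }
}
```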

This is all working fine (and thanks for sorting out the issue the other day Remco with SetUpFixture) but I find now that I am getting many more databases created than I expected.

There is a teardown in the SetUpFixture that cleans up the databases, but I was expecting each test runner process to create the database once, then run all of its tests using that database, then tear it down.

Given that I have 4 threads for running tests, and 400 tests, I would have expected roughly 100 tests per test runner and only 4 databases to be created.

However, when I select Run on the 400 tests, they seem to be divided up into a long list of batches of 8, and each time one of them starts it creates the databases for those 8 tests, tears them down, then does the same thing for the next 8 tests!

This increases the time to run all 400 tests massively.

I've tried setting the ExclusivelyUses attribute on the TestFixture, and what happens after this is interesting:

* If I run the tests in debug mode, I get all 400 tests running in a single queued test process. This only creates the database once, runs all 400 tests and then tears it down. Excellent.
* If I run the tests in Run mode, I get the 400 tests spread across the queue again in batch sizes varying from 8 to 97 (it varies each time), and each time one of these batches starts, the databases are created all over again.

Why do debug and run behave differently?

Should NCrunch be able to work as follows?

4 test threads
400 tests

Run all 400 tests, and get 4 groups of 100 tests queued... why do I keep getting batches of 8 tests with the SetUpFixture continually being run?

Is there a way to make the test runners just stay up and initialized once and then run all the tests that are directed to them?
Remco
#2 Posted : Sunday, November 8, 2015 9:27:36 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Hi, thanks for sharing this issue.

Everything you've described is exactly as designed, and at first look it seems crazy and clumsy, but there is a good reason for it.

NCrunch is a fully continuous runner - it uses a processing queue containing a list of tests to execute that is fully dynamic. As you make changes to your codebase while NCrunch runs the tests, the ideal execution order of the tests changes, and NCrunch continues to update the queue.

This runs counter to the design of test frameworks like NUnit, where the framework is designed for a single end-to-end run of all (or a select few) tests in a given codebase. The only way NCrunch can make NUnit work with a dynamic execution order is by making small calls into the test framework, running the tests in batches. Depending upon the execution time of your tests, you may have a large number of batches to execute your 400 tests (look for the number of test tasks in the Processing Queue Window).

Because of the way it's designed, NUnit will execute a SetUpFixture for every call into the test environment, with the corresponding tear down happening on exit from the call.

The mechanics of this can make a test run take far longer than it needs to if there are expensive operations in the SetUpFixture. The good news is that it's pretty easy to design your code so this won't be a problem for you.

Add a static field to your SetUpFixture that prevents it from being run more than once for the same process (e.g. private static bool hasBeenRun). See here for an example: http://www.ncrunch.net/documentation/considerations-and-constraints_test-atomicity.
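A minimal sketch of that guard, assuming NUnit 3 attribute names; hasBeenRun is the field suggested above, and everything else is illustrative:

```csharp
using NUnit.Framework;

[SetUpFixture]
public class IntegrationDatabaseSetup
{
    // Static, so it survives across the many small batch calls NCrunch
    // makes into the same test runner process.
    private static bool hasBeenRun;

    [OneTimeSetUp]
    public void SetUp()
    {
        if (hasBeenRun)
            return; // database already created earlier in this process

        // ... expensive one-off work, e.g. creating the test database ...
        hasBeenRun = true;
    }
}
```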

The TearDown code is a bit trickier. The problem with placing cleanup code in a teardown is that you actually can't be 100% sure that this code is run. There are a number of things that can stop a teardown from running (e.g. sudden process termination, power outage, out of memory exception, stack overflow exception, etc). Even without NCrunch in the picture, this makes the teardown a bad place to remove your database. Under NCrunch, teardowns just won't work for this sort of cleanup because you'll have no way of knowing when a test run has actually completed - and NCrunch won't know this either, because it's true continuous testing.

The best option here is instead to concentrate on cleaning up stale databases at the start of the next test run, rather than making each run clean up after itself. Instead of naming your databases using a GUID, try naming them using the Windows process ID of the currently executing process (Process.GetCurrentProcess().Id). When a test run starts, you can query the DB server for a list of databases and compare this against the list of running processes on the computer. Any test database on the DB server that doesn't have a matching process under execution can be safely removed. This also has a nice advantage in giving you a way to inspect the databases left over from a failed test run, as they won't get cleaned up until the next test run starts.
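A sketch of that strategy; ListTestDatabases and DropDatabase are hypothetical stand-ins for your own DB-server queries, while Process.GetCurrentProcess and Process.GetProcesses are real System.Diagnostics APIs:

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

public static class TestDatabases
{
    private const string Prefix = "IntegrationTests_";

    // Name the database after the current process ID, which is unique
    // among concurrently running processes on the machine.
    public static string CurrentName() =>
        Prefix + Process.GetCurrentProcess().Id;

    // At the start of a run, drop any test database whose owning
    // process is no longer alive.
    public static void RemoveStaleDatabases()
    {
        var livePids = new HashSet<int>(Process.GetProcesses().Select(p => p.Id));

        foreach (var dbName in ListTestDatabases())
        {
            var suffix = dbName.Substring(Prefix.Length);
            if (int.TryParse(suffix, out var pid) && !livePids.Contains(pid))
                DropDatabase(dbName); // safe: the owning process has exited
        }
    }

    // Hypothetical helpers: query your DB server for databases whose
    // names start with Prefix, and drop one by name.
    private static IEnumerable<string> ListTestDatabases() =>
        Enumerable.Empty<string>();

    private static void DropDatabase(string name) { }
}
```

A leftover database from a crashed run keeps its process-ID name, so it is easy to identify and inspect before the next run removes it.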
nrjohnstone
#3 Posted : Sunday, November 8, 2015 9:39:33 PM(UTC)
Excellent advice.. NCrunch is fairly addictive and I am maybe taking things a bit far by wanting our integration tests to also be continuously run :-)

I had my blinkers on with regards to uniqueness and hadn't considered using the process ID.

I have tried the static boolean to prevent multiple runs and this didn't seem to work as I still get instances where the static boolean is false, which probably indicates a new process has been started.

Do you think there are other test frameworks that would be better suited to avoiding NUnit's design limitations in this matter?
Remco
#4 Posted : Sunday, November 8, 2015 9:53:56 PM(UTC)
nrjohnstone;7968 wrote:
Excellent advice.. NCrunch is fairly addictive and I am maybe taking things a bit far by wanting our integration tests to also be continuously run :-)


I would say that if you haven't yet managed to get all your tests running continuously (and distributed), you're missing out!

nrjohnstone;7968 wrote:

I had my blinkers on with regards to uniqueness and hadn't considered using the process ID.


The process ID is very, very useful when working with NCrunch, as it's one thing that is guaranteed to be unique across multiple processes executing in parallel.

nrjohnstone;7968 wrote:

I have tried the static boolean to prevent multiple runs and this didn't seem to work as I still get instances where the static boolean is false, which probably indicates a new process has been started.


If a new process has started, then we do definitely want the boolean to be false and the setup code to be run again. A new process can broadly be one of three things:
1. A fresh execution run of a whole new version of the codebase (you've made a change)
2. A new parallel session (i.e. you already have one session running, NCrunch is starting another to add capacity)
3. You've told NCrunch to spin up a fresh new process to run some tests

In each case, you want the setup code to run, and you want a new database. You don't want separate test batches sharing the same database if they're running in parallel, as you'll likely see database locking issues.

nrjohnstone;7968 wrote:

Do you think there are other test frameworks that would be better suited to avoiding NUnit's design limitations in this matter?


Unfortunately not. I think the only way we're likely to see this in the near future is if NCrunch has its own test framework, but I'd prefer to avoid this as it will be fully proprietary and wouldn't work with anything else.