The test execution behaviour of NCrunch vs Resharper is very, very different.
Resharper constructs a single test process and application domain, then runs through ALL tests sequentially until it hits the bottom.
NCrunch breaks the tests up into a number of different tasks; each task can result in a new process and application domain being created to run the tests contained within it. Processes are re-used between tasks where possible, but the test runner must still be reinitialised for each task.
The main advantage of NCrunch's batching is that it allows many sets of tests to run in parallel, and it allows the queue of tests to be managed dynamically, which is important while you are working in your codebase and the execution priorities of tests are constantly changing. The downside is that there is more overhead involved with this approach, especially for larger assemblies with more tests, as the test framework (i.e. NUnit) still needs to scan and process each test in the assembly while kicking off each batch.
This means that if you have a suite of lots and lots of small tests, you'll likely receive faster throughput with a serial test runner (such as Resharper) than you would with NCrunch, because the overhead of the parallelisation and prioritisation is higher than the execution time of the tests.
However, it's extremely important to note the difference between throughput and relevant test results. Although in this situation NCrunch may take longer to run the entire test pipeline, it will still execute the most relevant tests first, so there is no need for you to wait for an entire test run to confirm whether or not your code is right. Generally, when working with many fast-running tests, the scope of each test is limited, meaning that a change to the codebase will probably only require the execution of a very small number of tests - which will be executed first by NCrunch.
With that said, there are things you can check to reduce the overhead of NCrunch's processing queue:
1. When using NUnit, try setting your framework utilisation type to StaticAnalysis. This will cause NCrunch to use its own logic to discover NUnit tests during instrumentation, avoiding the need for an extra analysis step using NUnit. Note that NCrunch's static discovery logic isn't a 100% match for NUnit - so not all NUnit features are fully supported using this approach, but it should be a whole world faster. Based on your description, I have a feeling that the sluggishness is being caused by NUnit needing to discover a vast quantity of TestCase tests, which is something that isn't done by Resharper because Resharper always looks for tests statically.
2. Never judge an NCrunch test run by its first pass. When it first executes all your tests, NCrunch will not have information about the size and run time of each of your tests, so it will batch them very aggressively. This means the first full test run can often take many times longer than a typical run made after the information has been collected.
3. Try turning off the 'Instrument output assembly' project-level configuration option for ALL the projects in your solution. NCrunch's instrumentation does add weight, and this can slow down the code under test. Testing with this option disabled can identify whether or not the problem is being caused by NCrunch's instrumentation - in which case there may be code blocks you can knock out with code coverage suppression (there's a small example of this after the list below).
4. Have a read of the performance tuning guide, as this contains tips about a number of configuration options that can reduce cross-batch overhead (particularly the max test runners to pool configuration setting).
5. When setting up NCrunch using the Configuration Wizard, note that asking the wizard to prioritise memory efficiency over processing time can noticeably increase processing overhead.
6. Avoid creating vast swarms of tests using NUnit TestCase attributes. These attributes can add massive amounts of overhead with comparatively very little code, as each test case forms an individual test that must have a whole arrangement of data collected and stored against it (code coverage, exceptions, trace output, etc). Where possible, try to design these tests so they are grouped together to make them less granular, as too many small tests can create an unnecessary amount of work for the engine (there's a rough before/after example at the end of this list).
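
To make point 3 a little more concrete, here's a rough sketch of coverage suppression comments in a C# class. The class and member names are just made up for the example - the idea is simply to fence off code you don't need coverage data for (the full comment syntax is in the coverage suppression documentation):

public static class ReportFormatter
{
    //ncrunch: no coverage start
    // Diagnostic-only code - suppressing coverage here means NCrunch won't
    // instrument these lines, so they add no weight to the code under test.
    public static void DumpDiagnostics(string[] lines)
    {
        foreach (var line in lines)
            System.Console.Error.WriteLine(line);
    }
    //ncrunch: no coverage end

    // This member is still instrumented and tracked as normal.
    public static string FormatLine(string name, decimal value)
    {
        return name + ": " + value.ToString("0.00");
    }
}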
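
And for point 6, a rough before/after illustration of grouping test cases. The Parser class and the values here are invented purely for the example - the principle is what matters:

using NUnit.Framework;

public static class Parser
{
    public static int Parse(string text) { return int.Parse(text); }
}

public class ParserTests
{
    // Fine-grained: every [TestCase] becomes a separate test that the engine
    // must track individually (coverage, exceptions, trace output, etc).
    [TestCase(0)]
    [TestCase(1)]
    [TestCase(2)]
    // ...imagine hundreds more of these
    public void Parses_Single_Value(int value)
    {
        Assert.That(Parser.Parse(value.ToString()), Is.EqualTo(value));
    }

    // Coarser-grained: one test covering the same inputs in a loop, so there
    // is only one test's worth of data for the engine to collect and store.
    [Test]
    public void Parses_Value_Range()
    {
        for (var value = 0; value < 500; value++)
        {
            Assert.That(Parser.Parse(value.ToString()), Is.EqualTo(value), "Failed for " + value);
        }
    }
}

The trade-off is diagnostic detail - the grouped test only reports the first failing value - so it's best reserved for cases that are genuinely cheap and numerous.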
I hope this helps!
Cheers,
Remco