Hi, thanks for posting!
At the moment, this isn't possible. It may happen one day, although I have a feeling it won't be soon. At one stage there was actually an effort made to build a console runner for NCrunch that could be run on a build server to execute tests in parallel. This failed experiment was a good learning experience, and it highlighted many of the differences between a true continuous test runner and an end-to-end sequential runner. With a lot of time and effort I think it would be possible to make this work with NCrunch, but I feel there are other, higher-value features that will be implemented first.
The problems with using a continuous testing engine on a build server stem from the vast difference in design and purpose between these two tools:
- "True" continuous testing is cyclic. It relies on concurrent passes through tests in order to give results quickly, where on a build server you basically have one end-to-end run (which I accept may or may not be executed in parallel)
- Build servers do more than just run tests and return results - they also produce workable build artifacts that can be used to deploy software into a production environment. This is a very different purpose from NCrunch's, where the objective is to create an isolated sandbox that can be ripped up and manipulated to provide meaningful data quickly
- At the level of a developer workstation, tests that give false failures are an irritation - you can just re-run them and they go away (hopefully you'll eventually fix them too). At build server level, false failures are a much bigger problem, as they interrupt the entire team and usually require expensive troubleshooting or a full re-run of the build to resolve. When executing tests in parallel, the likelihood of false failures is greatly increased because of the additional system load and complexity (e.g. a greater chance of an undeclared resource conflict, or system load surfacing latent race conditions - see the sketch below). As solutions grow in size and number of tests, this becomes more of a problem. The priority of a build server is to give a clean and consistent run of all builds and tests to produce reliable results. By executing tests using an engine with intelligent prioritisation, impact analysis and parallelisation, you remove much of that consistency, making the build harder to maintain and troubleshoot. I feel that additional diagnostics features would be needed before such an engine could really add value.
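To make the "undeclared resource conflict" point concrete, here's a contrived sketch (the fixture, test and file names are invented for illustration): two NUnit tests share a temp file without declaring it, so they pass in a sequential end-to-end run but can randomly fail when a parallel runner happens to schedule them together.

```csharp
using System.IO;
using NUnit.Framework;

[TestFixture]
public class ReportTests
{
    // Both tests touch the same file on disk, but nothing declares this,
    // so a parallel runner is free to schedule them at the same time.
    private static readonly string SharedPath =
        Path.Combine(Path.GetTempPath(), "report.tmp");

    [Test]
    public void WritesHeader()
    {
        File.WriteAllText(SharedPath, "HEADER");
        Assert.That(File.ReadAllText(SharedPath), Is.EqualTo("HEADER"));
    }

    [Test]
    public void WritesBody()
    {
        File.WriteAllText(SharedPath, "BODY");
        // Always passes in a sequential run; under parallel execution it
        // can fail intermittently if WritesHeader rewrites the file in between.
        Assert.That(File.ReadAllText(SharedPath), Is.EqualTo("BODY"));
    }
}
```

Within NCrunch you can declare a conflict like this (e.g. with the ExclusivelyUses attribute from NCrunch.Framework) so the engine won't co-schedule the tests, but on a build server every conflict that goes undeclared surfaces as an expensive, hard-to-reproduce red build.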
I hope this makes sense :)
Cheers,
Remco