Hi Alberto,
Thanks for posting! devDept looks like a very interesting company and I appreciate the time you're investing to take a look at NCrunch. I hope the tool exceeds your expectations throughout your trial!
I'll try to answer the issues you've raised as best I can, in the order you've presented them.
Within the last two hours, this website had a major upgrade and redesign which included replacing many parts of the forum (specifically the user-account-related components) with new code that has been tested with IE10. I hope that the problems you've experienced have now been resolved, but I'll also perform some additional testing on the forum itself using IE10 to make sure there aren't any issues that have been missed.
The website upgrade coincided with the release of NCrunch v1.42, which includes new configuration options allowing you to change the background colours of all the NCrunch tool windows to better bring them in line with the new VS2012 dark theme. Full alignment with the Visual Studio UI is something that will need to be treated as a long-term goal for NCrunch; with v1.42 being the first commercial release, stability is currently considered a higher priority.
NCrunch is designed to report test timeouts via the text output associated with a test. If your tests are outputting large amounts of trace information it's quite likely that the timeout messages are residing at the bottom of the test output. If for some reason these messages aren't showing, please let me know which test framework you're using and I'll take a closer look.
Fortunately, the most serious issue you've encountered should also be the easiest to solve. When configured with parallel execution enabled, the current version of NCrunch can often execute long-running tests concurrently with themselves. This might sound crazy, but in certain situations it does make good sense: if you kick off a long-running test and then make a change to your codebase, the change results in a fresh execution of the same test against the new version of the codebase. It's recognised that this behaviour causes many issues, so this logic is expected to become configurable in future (with the default setting favouring compatibility over performance). For the time being, there are broadly two different ways you can solve this problem:
1. Engineer your tests to make use of random file/directory names - Performance-wise, this is the best solution as it allows NCrunch to continue to run the tests concurrently with themselves, although feasibility may depend upon how your test projects are structured. If the tests all descend from a common base test fixture, you may be able to adjust the base class to create a random directory and set this as the current directory during test fixture setup. Always make sure that any randomly generated files/directories are deleted by the test on teardown (otherwise you may accumulate junk in your workspace over time).
2. Adorn the tests with NCrunch attributes to control their concurrency - NCrunch allows you to control (via code-level attributes) how tests are executed concurrently. Usually this is handled with the ExclusivelyUsesAttribute. If two tests declare the same exclusively used resource, NCrunch will avoid running them concurrently. This also covers tests being run concurrently with themselves - a test that declares a unique exclusively used resource will never be run concurrently with itself.
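To give you an idea of the first approach, here's a rough sketch of a shared base fixture that gives each test run its own scratch directory. This assumes NUnit, and the class and member names here are just placeholders - adapt them to however your test projects are actually structured:

```csharp
using System;
using System.IO;
using NUnit.Framework;

// Hypothetical shared base fixture: each test gets its own randomly
// named working directory, which is cleaned up again on teardown.
public abstract class IsolatedDirectoryFixture
{
    private string _workingDirectory;
    private string _originalDirectory;

    [SetUp]
    public void CreateIsolatedDirectory()
    {
        _originalDirectory = Directory.GetCurrentDirectory();
        _workingDirectory = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        Directory.CreateDirectory(_workingDirectory);
        Directory.SetCurrentDirectory(_workingDirectory);
    }

    [TearDown]
    public void RemoveIsolatedDirectory()
    {
        // Restore the original directory before deleting, otherwise the
        // delete can fail because the directory is still in use.
        Directory.SetCurrentDirectory(_originalDirectory);
        Directory.Delete(_workingDirectory, recursive: true);
    }
}
```

Any test fixture inheriting from a base class like this runs each test in its own scratch directory, so two concurrent executions of the same test won't collide on file paths, and teardown keeps junk from accumulating in your workspace.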
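For the second approach, the attribute usage looks roughly like this (assuming NUnit; the resource name is arbitrary - it just needs to be unique to the resource you want protected):

```csharp
using NCrunch.Framework;
using NUnit.Framework;

[TestFixture]
public class DataFileTests
{
    // "DataFileTests.OutputFile" is an arbitrary resource name. Any tests
    // sharing this name (including two concurrent runs of this same test)
    // will never be executed at the same time by NCrunch.
    [Test, ExclusivelyUses("DataFileTests.OutputFile")]
    public void WritesExpectedOutputFile()
    {
        // ... test body that writes to a fixed file path ...
    }
}
```

Note that because a unique resource name prevents a test from running concurrently with itself, this trades away some of the parallelism that option 1 preserves.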
I hope this helps you.
Thanks!
Remco