Ouch, thanks for the fast reply.
That's gonna be a big one, so have a coffee :)
>>If I understand what you're asking for, the ideal feature to solve this would be one where the NCrunch engine is able to duplicate a test multiple times inside the queue, with each instance being applied to a different grid node.
Yes, indeed!
>> When tests are run in different environments, they can behave differently.
Indeed they can.
It's the developer's responsibility to make sure the codebase works well across environments, and the same goes for the regression tests - check your environment if you really need to set something up.
The codebase of the software is the same, the tests are the same, and the expected behaviour is the same.
Rarely, there is some code designed and compiled to run only on XP/x64, but that's not a shared codebase meant to run on various environments. For a shared codebase there is only one set of tests, shared across environments.
>>This can create some confusing results in the NCrunch UI, which is designed to only show one result per test (the most recent one)
Too bad for NCrunch :)
There is room for improvement! Woohoo!
>> This could be problematic when one of the tests fails, but the others pass.
Expected behaviour - that's exactly what we are looking for.
A shared codebase is meant to work on every single environment, so we are extremely keen to see something like "Passed on 2 environments out of 3" in a nice-looking grid view for failed tests, and "Passed 3 out of 3" for passed tests. Absolutely awesome!
Failures mean that some code does not work and needs to be fixed, so we can act accordingly.
>>To display the results meaningfully, NCrunch would need to somehow correlate the execution runs and produce an aggregate report tied to the same test.
Where and how much do we pay for that? :)
That is true; the "Test outcome" column could be either a simple one storing a single value, or a complex one that is green if all grid nodes are green and red if any one of the nodes fails. Aaaaa... how much we want this! Can we write a plugin or something? :)
>>The situation could also become more confusing when considering the reliability of the network. It's possible that one of the VMs may be offline at the time the NCrunch tests are run.
That's fine. A grid node thing, not the test.
So, once connectivity is back, the grid node is back, and then it might be reasonable to update the results, isn't it?
Refresh/rerun the test against all nodes - people can live with that.
>> If NCrunch is to duplicate the tests across all connected nodes, an offline node would simply be a test case that is silently ignored.
Ignore != Passed, so, what's the issue?
2 out of 3 passed and one is ignored, so the total across 3 VMs can't be green. That's fine.
>> the ideal solution becomes clear: These are, in fact, different tests.
Different from the developer's perspective? Not so sure.
We have 700+ regression tests which are run under 3+ environments right now.
How? Manually. Have 3+ VMs with VS, then get the latest, compile, run the tests, fix, and repeat.
Then RDP to the next VM, and do the same all over again.
Now, having these "RequiresCapability" attributes (or any other kind) and (oh boy) inheritance is supposed to win the day.
700+ tests across 10+ environments with RequiresCapability and inheritance? Really?
Having said that, these tests are just about functionality, only to guard against regressions.
We expect to add "performance tests" to find out what works fast/slow and on which environments, plus some tests for the concurrency/multithreading implementation. That seems to raise the bar up to 1000 tests by the end of the year, all of which have to be run against several environments/VMs.
Now, we surely can use T4/ReSharper/PS to generate all the tests for all the environments with RequiresCapability and the other stuff.
Would it be a solution we need to go?
Btw, Visual Studio would go crazy with [RequiresCapability("WinXP")] and inheritance, wouldn't it? It'd see 2x/3x/4x the tests, then.
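To make that concrete, here is a minimal sketch of the inheritance approach as we understand it, assuming NUnit plus the NCrunch.Framework attributes; the capability names "WinXP"/"Win7"/"Win8" are just placeholders that would have to match whatever each grid node declares:

    using NUnit.Framework;
    using NCrunch.Framework;

    // All shared regression tests live in one abstract base fixture,
    // written once.
    public abstract class RegressionTestsBase
    {
        [Test]
        public void Library_HandlesServicePackQuirks()
        {
            // ... real check against the live enterprise software ...
        }

        // ... 700+ more tests like this ...
    }

    // One thin subclass per environment. NUnit re-discovers every
    // inherited test in each concrete subclass, which is exactly the
    // 2x/3x/4x multiplication Visual Studio would show.
    [TestFixture, RequiresCapability("WinXP")]
    public class RegressionTests_WinXP : RegressionTestsBase { }

    [TestFixture, RequiresCapability("Win7")]
    public class RegressionTests_Win7 : RegressionTestsBase { }

    [TestFixture, RequiresCapability("Win8")]
    public class RegressionTests_Win8 : RegressionTestsBase { }

The subclasses themselves are tiny, but with 10+ environments that boilerplate (and the test-count explosion in the IDE) multiplies quickly - hence the question above.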
>> In this way, you'll have a new test for each platform, each with its own result. An advantage of this approach is that it scales well where many systems are involved. For example, you may have two Windows XP VMs that are each able to run the WinXP tests. These VMs would then be used efficiently by the processing queue to maximise parallel execution and minimise unnecessary processing.
That's the other thing, regarding grid processing - you're right.
Right now it's unclear how to set this up properly - all the tests across several environments, plus splitting them up within each environment.
Say, 600 tests need to be run on XP/Win7/Win8.
We have 9 VMs - 3 with XP, 3 with Win7 and 3 with Win8.
Now, we would surely be happy to split the 600 into 2 batches across two VMs, or later on across 3/4/6 VMs.
How'd we do that?
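If we read your explanation right, the split within an environment should mostly fall out for free: if all three XP nodes declare the same "WinXP" capability, the processing queue load-balances the WinXP tests across whichever nodes are free. As a purely hypothetical sketch (not a confirmed NCrunch feature, and reusing the RegressionTestsBase fixture from the earlier sketch), pinning fixed batches to particular nodes could presumably be done with node-specific capability names:

    // Hypothetical: each XP node would declare a batch-specific capability
    // (e.g. "WinXP-BatchA") in addition to the shared "WinXP" one; a fixture
    // requiring the batch capability would then only run on those nodes.
    [TestFixture, RequiresCapability("WinXP-BatchA")]
    public class RegressionTests_WinXP_BatchA : RegressionTestsBase { }

    [TestFixture, RequiresCapability("WinXP-BatchB")]
    public class RegressionTests_WinXP_BatchB : RegressionTestsBase { }

Is that roughly how you'd expect it to be set up?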
>>I think an ideal approach would somehow be to generate the node-specific tests. What does your test suite look like at the moment?
We have a mission-critical software library; it comes in .NET 4.5/4.0/3.5.
520+ regression tests, written once and designed to work on every supported environment.
We have 4 environments to support so far.
Pretty much, we have these 4 environments: get the latest, run the tests and see what works and what doesn't. Fix the issues, repeat.
It takes 60-90 minutes per environment to run the whole test suite.
Now, we started exploring NCrunch and grid processing due to:
1) Two new environments coming up in a few months from now, and more might come later (those freaking enterprise software updates/CUs/service packs)
2) Starting a "refactoring" phase - refactoring the core breaks a lot, so we run the tests more intensively
3) Adding two new "big" features and expecting more tests to be added - guessing 200-300 in a few months
4) "Hardening" already existing tests - checking more things, so execution time grows
Again, all this is regression testing; there are a few unit tests which take no time at all to execute.
Regression tests are designed to make sure the library works on a particular piece of enterprise software, supports particular cases and does not break production. It's mission critical, so we don't fake/mock anything at all. The tests run directly against the real enterprise software, handling all the bugs/issues/inconveniences and other things related to its service packs, updates and so on.
Um... sorry for the big messages here and there, just wanted to provide more context around all that.
Hope it gives more visibility into our little issue. We might not be seeing all the features of NCrunch, so we're absolutely open to ideas, thanks!
And still, excited to see NCrunch in action. Terrific thing, truly!