ssg;8780 wrote:
That removes the need to override ToString() in the actual code. I still wonder, though, how NUnit could not have come up with a "sequence"-based identification for data-driven tests? That would be more than enough between builds and debug attempts. We don't need a universally unique identifier; a sequence would work almost 99.999% of the time. MBUnit did not even support individual running of data-driven tests. So it's nice to have the ability to execute/debug individual test cases, but I'm almost always OK with running the wrong test in the next build when I edit rows.
Sequence-based identification unfortunately doesn't work when you need to correlate state between runs as the test sequence changes.
Let's say, for example, you have 3 test cases, each with the same name (I've given each a number suffix so we can tell them apart, but assume that as far as the runner is concerned they all display identically; a minimal sketch of such a source follows the list):
TestCase1
TestCase2
TestCase3
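As a purely illustrative sketch (the Scenario type, fixture and values here are hypothetical, not taken from anyone's actual code), a source like this produces exactly that situation: the argument type has no ToString() override, so NUnit falls back to the type name and all three cases display identically in the runner.

using System.Collections.Generic;
using NUnit.Framework;

public class Scenario
{
    public int Input;
    public int Expected;
    // No ToString() override - NUnit falls back to the type name when naming
    // the generated cases, so all three look the same in the runner.
}

[TestFixture]
public class CalculatorTests
{
    private static IEnumerable<Scenario> Scenarios()
    {
        yield return new Scenario { Input = 1, Expected = 2 };
        yield return new Scenario { Input = 2, Expected = 4 };
        yield return new Scenario { Input = 3, Expected = 6 };
    }

    [TestCaseSource(nameof(Scenarios))]
    public void Doubles_the_input(Scenario scenario)
    {
        Assert.AreEqual(scenario.Expected, scenario.Input * 2);
    }
}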
And you run these test cases. The runner identifies them by sequence IDs, which NCrunch uses to map their results. Each test case then has its own coverage data, trace output, and a pass/fail flag.
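Purely to illustrate the idea (this is not NCrunch's actual data model, just a sketch of what sequence-based correlation amounts to), imagine those per-case results keyed by nothing more than the ordinal position each case held when it was discovered and run:

using System.Collections.Generic;

// Illustrative only: each result is identified solely by the position its
// case occupied in the sequence at the time it ran.
var resultsBySequence = new Dictionary<int, TestCaseResult>
{
    { 0, new TestCaseResult { Passed = true,  CoveredLines = 14, Trace = "(trace output)" } }, // TestCase1
    { 1, new TestCaseResult { Passed = true,  CoveredLines = 9,  Trace = "(trace output)" } }, // TestCase2
    { 2, new TestCaseResult { Passed = false, CoveredLines = 11, Trace = "(trace output)" } }, // TestCase3
};

public class TestCaseResult
{
    public bool Passed;
    public int CoveredLines;
    public string Trace;
}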
Then, you choose to add a new test case to the top of the suite:
TestCase0
TestCase1
TestCase2
TestCase3
Now NCrunch re-runs your discovery step. When discovery has finished, suddenly ALL the code coverage data collected from TestCase1 is assigned to TestCase0, along with its pass/fail result and trace output. TestCase0 hasn't even been run yet, so this result is plainly wrong. The other test cases likewise have their results shifted onto the wrong cases.
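Continuing the same hypothetical sketch (again, an illustration of index-only correlation, not NCrunch's real internals), the shift is easy to see:

using System;

// Order the cases had when their results were recorded, and the order
// reported by the new discovery run.
var caseNamesBeforeEdit = new[] { "TestCase1", "TestCase2", "TestCase3" };
var caseNamesAfterEdit  = new[] { "TestCase0", "TestCase1", "TestCase2", "TestCase3" };

for (int i = 0; i < caseNamesBeforeEdit.Length; i++)
{
    // Position i is the only correlation available, so each stored result
    // lands on whichever case now occupies position i.
    Console.WriteLine(
        $"{caseNamesAfterEdit[i]} now shows the coverage/trace/pass-fail recorded for {caseNamesBeforeEdit[i]}");
}
// ...and TestCase3, now at position 3, is left with no prior result at all.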
I grant that in the majority of cases you won't see this. Most likely you'll write the test, get it working, and it will then sit in your suite and may never be updated again. So your 99.999% estimate may in fact be correct, but in the 0.001% of cases where it isn't, we care very much, because NCrunch would then be giving you results that are dangerously misleading.
You could argue that the coverage and pass/fail results don't matter between the individual test cases, but in that case I would ask why you are using TestCaseSource at all. If the test cases can't be made visually distinct and can't have their own safely correlated result sets, why not just bundle them into a single test with a loop?
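For comparison, here is a sketch of that single-test-with-a-loop alternative, reusing the hypothetical Scenarios() source from the sketch earlier in this post. You give up individual execution, but there was never a reliably correlated per-case identity to give up in the first place.

[Test]
public void Doubles_every_input()
{
    foreach (var scenario in Scenarios())
    {
        // One result, one coverage map and one trace stream for all the cases -
        // which is effectively all that index-only correlation gives you anyway.
        Assert.AreEqual(scenario.Expected, scenario.Input * 2,
            $"Failed for input {scenario.Input}");
    }
}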