Issues with NUnit TestCaseSource
jonskeet
#1 Posted : Wednesday, October 19, 2011 6:49:32 PM(UTC)
I've just downloaded NCrunch 1.35.0.16b, and have come up against two issues with TestCaseSource handling:

1) If there's no test data for a particular test, NCrunch counts that as an outright failure, whereas NUnit itself treats it more like an "ignored" test.
2) If the test data implements ITestCaseData and TestName returns a non-null value, that appears to force a failure. This is most easily seen by calling SetName on TestCaseData. Sample code:

Code:
using NUnit.Framework;

namespace NCrunchSandbox
{
    [TestFixture]
    public class TestCaseSourceTest
    {

        public static readonly TestCaseData[] Working = { new TestCaseData("foo") };                 // runs fine
        public static readonly TestCaseData[] Failing = { new TestCaseData("foo").SetName("Bang") }; // SetName forces a failure

        [Test]
        [TestCaseSource("Working")]
        public void WorkingTest(string x)
        {
        }

        [Test]
        [TestCaseSource("Failing")]
        public void FailingTest(string x)
        {
        }
    }
}
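
Issue 1 can be reproduced along the same lines with an empty source (a minimal sketch - the class and member names here are just for illustration):

Code:
using NUnit.Framework;

namespace NCrunchSandbox
{
    [TestFixture]
    public class EmptySourceTest
    {
        // No test cases at all: NUnit treats the resulting test more like
        // an ignored test, but NCrunch 1.35 reports it as a failure.
        public static readonly TestCaseData[] Empty = { };

        [Test]
        [TestCaseSource("Empty")]
        public void NoDataTest(string x)
        {
        }
    }
}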

Remco
#2 Posted : Thursday, October 20, 2011 6:49:16 AM(UTC)
Hi Jon,

Thanks for posting. The problems you're describing are caused by NCrunch's inability to determine (and display) a difference between an inconclusive test result and a genuine failure. Right now (for the sake of simplicity) NCrunch doesn't have an 'inconclusive' status, so the UI simply reports these tests as being failures. The particularly unintuitive aspect of this is that NUnit doesn't attach any additional messages explaining the reason for a test being inconclusive, so NCrunch simply shows a red X without any description.

You'll notice this same issue applies to other NUnit features that rely on reporting an inconclusive status.
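For example, here's a minimal sketch of a test that goes straight through NUnit's inconclusive path (illustrative names only):

Code:
using NUnit.Framework;

namespace NCrunchSandbox
{
    [TestFixture]
    public class InconclusiveExample
    {
        [Test]
        public void ReportsInconclusive()
        {
            // NUnit marks this test inconclusive rather than failed;
            // NCrunch 1.35 displays it as a red X with no message.
            Assert.Inconclusive("No data available");
        }
    }
}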

I do have plans to change the way this works in a future build (so that an inconclusive result can be seen and filtered in the UI).

Thanks for providing a code sample. This is always a huge help :)


Cheers,

Remco
jonskeet
#3 Posted : Thursday, October 20, 2011 7:52:06 AM(UTC)
Remco;542 wrote:

Thanks for posting. The problems you're describing are caused by NCrunch's inability to determine (and display) a difference between an inconclusive test result and a genuine failure. Right now (for the sake of simplicity) NCrunch doesn't have an 'inconclusive' status, so the UI simply reports these tests as being failures. The particularly unintuitive aspect of this is that NUnit doesn't attach any additional messages explaining the reason for a test being inconclusive, so NCrunch simply shows a red X without any description.


That explains the first issue - it doesn't explain why test case data with names causes problems though, does it? To be honest that's more of a concern for me now, as it really affects the usability of test cases.

Jon
Remco
#4 Posted : Thursday, October 20, 2011 8:19:03 AM(UTC)
Hmmm... What kind of result would you normally expect from a name mismatch in the test case data?
jonskeet
#5 Posted : Thursday, October 20, 2011 8:56:19 AM(UTC)
Remco;544 wrote:
Hmmm... What kind of result would you normally expect from a name mismatch in the test case data?


It's not a name mismatch. It's giving the test data item a name so that it can show up in the list of tests. It's metadata, effectively. The test code I gave you should pass with no problems.
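
To make the intent concrete, here's a sketch of the kind of thing I mean (names are illustrative) - the name is purely display metadata, so the runner shows something readable for each case:

Code:
using System.Collections.Generic;
using System.Globalization;
using NUnit.Framework;

namespace NCrunchSandbox
{
    [TestFixture]
    public class NamedCaseTests
    {
        // SetName only affects how each case is displayed in the runner;
        // it isn't used to look anything up, and the tests should pass.
        public static IEnumerable<TestCaseData> Cultures()
        {
            foreach (string name in new[] { "en-US", "fr-FR", "de-DE" })
            {
                yield return new TestCaseData(CultureInfo.GetCultureInfo(name))
                    .SetName("Format_" + name);
            }
        }

        [Test]
        [TestCaseSource("Cultures")]
        public void FormatTest(CultureInfo culture)
        {
            // ... exercise culture-sensitive formatting here ...
        }
    }
}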
Remco
#6 Posted : Thursday, October 20, 2011 8:19:20 PM(UTC)
Sorry - I haven't had much exposure to TestCaseSource in the past, so it's a learning experience for me :)

I've had a deeper look at how NUnit behaves in this situation and I'm afraid I can't give a simple solution to this problem.

Essentially, NCrunch uses static metadata to locate tests within built assemblies. This opens up opportunities for optimising the test discovery process, but it has a serious drawback: it becomes nearly impossible to identify test metadata that is dynamically constructed at runtime (such as test aliases defined by .SetName). Other test runners that use the metadata approach (such as ReSharper) suffer from the same problem. The net effect is that the tests cannot be matched with NUnit's internal discovery system at runtime, and they simply won't be executed.
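
To illustrate (a rough sketch, not NCrunch's actual code): a metadata-based runner can enumerate the test methods an assembly contains, but it has no way of predicting names that only exist once the source is executed.

Code:
using System;
using System.Reflection;
using NUnit.Framework;

class StaticDiscoverySketch
{
    static void Main()
    {
        // Static view: reflection sees the method name "FailingTest".
        // Runtime view: NUnit builds a test named "Bang" from .SetName.
        // The two can't be matched without actually running the source.
        Assembly assembly = Assembly.LoadFrom("NCrunchSandbox.dll");
        foreach (Type type in assembly.GetTypes())
        {
            foreach (MethodInfo method in type.GetMethods())
            {
                if (method.IsDefined(typeof(TestAttribute), false))
                {
                    Console.WriteLine(type.FullName + "." + method.Name);
                }
            }
        }
    }
}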

I have some plans for changes in this area that should make dynamic discovery possible, though it will take some effort and won't be a quick fix.

Sorry about this. I wish I had a better answer or a workaround, though the best I can suggest is to keep an eye out and know that this will be supported in future.


Cheers,

Remco
jonskeet
#7 Posted : Thursday, October 20, 2011 10:25:43 PM(UTC)
Remco;547 wrote:
Sorry about this. I wish I had a better answer or a workaround, though the best I can suggest is to keep an eye out and know that this will be supported in future.



No problem - let me know if I can help out at all. If you want to try the Noda Time tests, I can help you set them up as they were before the revision which fixed things for NCrunch.

Jon
Remco
#8 Posted : Friday, October 21, 2011 6:18:53 AM(UTC)
I'm always on the lookout for good test solutions, so that would really be useful. Is there a specific revision number/id that I can grab which is unfixed?
jonskeet
#9 Posted : Friday, October 21, 2011 6:37:05 AM(UTC)
Remco;549 wrote:
I'm always on the lookout for good test solutions, so that would really be useful. Is there a specific revision number/id that I can grab which is unfixed?


Sure - this is the revision before the fix:
http://code.google.com/p...b6fdc2061e57d6f14239e54

The fixes are in
http://code.google.com/p...bb1d33c1cf835a321bfcca6
if you want to see what I changed.

(It's probably easiest to just sync to head and then update "back" to the broken revision - the fix revision mentions NCrunch in the description, so it should be easy to spot; just update to the one before it. :)
Remco
#10 Posted : Monday, December 19, 2011 8:19:26 AM(UTC)
Hi Jon,

Thanks for providing the links to your working solution. This was enormously helpful for me in making some critical changes to the way the NUnit integration works for NCrunch 1.36b.

1.36b (just released) allows you to consume NUnit using dynamic analysis instead of static analysis. This adds much closer support for many of the NUnit 2.5 features (such as TestCaseSource), and I believe it should resolve any problems you've experienced in this area.

I did notice in the NodaTime solution that you have many thousands of tests being generated dynamically from TestCaseSource. This doesn't cause any functional problems for NCrunch, but it does create a lot of work, as the new dynamic analysis allows every test case to be tracked individually with its own code coverage results. You may be able to reduce your test cycle times considerably by grouping some of the test cases together so that they don't need to be tracked separately.

Anyway, I'm eager to hear how well this works for you.


Cheers,

Remco
jonskeet
#11 Posted : Monday, December 19, 2011 8:35:54 AM(UTC)
Remco;770 wrote:

Thanks for providing the links to your working solution. This was enormously helpful for me in making some critical changes to the way the NUnit integration works for NCrunch 1.36b.


Well if I'm going to ask you to make some changes, the least I can do is a bit of investigation :)

Remco;770 wrote:

1.36b (just released) allows you to consume NUnit using dynamic analysis instead of static analysis. This adds much closer support for many of the NUnit 2.5 features (such as TestCaseSource), and I believe it should resolve any problems you've experienced in this area.


I've just removed the comments in the code saying "uncomment this code when NCrunch is fixed" - and it works brilliantly, with nice descriptive test names. Awesome work!

Remco;770 wrote:

I did notice in the NodaTime solution that you have many thousands of tests being generated dynamically from TestCaseSource. This doesn't cause any functional problems for NCrunch, but it does create a lot of work, as the new dynamic analysis allows every test case to be tracked individually with its own code coverage results. You may be able to reduce your test cycle times considerably by grouping some of the test cases together so that they don't need to be tracked separately.


Can you give any more details about what sort of grouping you mean? It's actually pretty handy to have separate coverage for the different paths in some cases, as the different cultures (which account for a large part of the "many tests") go through different code paths.

Remco;770 wrote:

Anyway, I'm eager to hear how well this works for you.


Fabulously, basically :) I happened to give a talk last Thursday where I gave a quick demo of NCrunch before moving on to the main topic of C# 5. Seemed to go down well :)

Small feature request: would it be possible to show the number of passing/failing/ignored tests, either in the Risk/Progress chart or in the top line of the NCrunch Tests window? Let me know if you'd like this in a separate post for tracking purposes.
Remco
#12 Posted : Monday, December 19, 2011 8:59:03 AM(UTC)
Good to hear! I'd be lying if I said that NUnit didn't put up a fight on this one, but persistence pays as always :)

For the grouping, I was thinking that you may be able to reduce the number of test cases by finding common elements between them, then putting those common elements inside a single test case that simply performs the same test a number of times with different parameters... so a manual TestCase, I guess you could call it. The obvious drawback would be that coverage for all the test cases would be represented under a single test, but the advantage would be faster response times. I guess this is something you could easily weigh up based on your development needs. If you have enough CPU cores to spread out the load, it may not be worth your while.
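
Something along these lines, perhaps (a sketch of the "manual TestCase" idea, with illustrative names):

Code:
using System.Globalization;
using NUnit.Framework;

[TestFixture]
public class GroupedCultureTests
{
    // One tracked test instead of thousands: faster for the runner to
    // process, at the cost of per-culture coverage and per-culture reruns.
    [Test]
    public void FormatsCorrectlyInAllCultures()
    {
        foreach (CultureInfo culture in
            CultureInfo.GetCultures(CultureTypes.SpecificCultures))
        {
            // ... run the same assertions for each culture ...
        }
    }
}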

I have some sketchy plans at the moment for a new progress indicator that is mounted on the status bar of the IDE, and giving on-the-fly numbers is one of the main focuses of this feature. 1.38b is expected to be mostly about UI features, so I'll see what I can do here.

Thanks for helping to spread the word about NCrunch!
jonskeet
#13 Posted : Monday, December 19, 2011 9:19:16 AM(UTC)
Remco;773 wrote:
Good to hear! I'd be lying if I said that NUnit didn't put up a fight on this one, but persistence pays as always :)


The results really are great.

Remco;773 wrote:

For the grouping, I was thinking that you may be able to reduce the number of test cases by finding common elements between them, then putting those common elements inside a single test case that simply performs the same test a number of times with different parameters... so a manual TestCase, I guess you could call it. The obvious drawback would be that coverage for all the test cases would be represented under a single test, but the advantage would be faster response times. I guess this is something you could easily weigh up based on your development needs. If you have enough CPU cores to spread out the load, it may not be worth your while.


The bigger downside IMO would be making it harder to debug one specific culture, which I've had to do a few times. Conditional breakpoints aren't as nice as being able to rerun one single route through the code easily. I don't know about other Noda Time team members, but my laptop is a quad-core i7, so it's nice to give it something to do :)

Remco;773 wrote:

I have some sketchy plans at the moment for a new progress indicator that is mounted on the status bar of the IDE, and giving on-the-fly numbers is one of the main focuses of this feature. 1.38b is expected to be mostly about UI features, so I'll see what I can do here.


Cool - no hurry, and have a great holiday. Thanks for pushing this one out before Christmas :)