data-driven xunit tests (Theory) expansion?
jamesmanning
#1 Posted : Monday, May 11, 2015 9:24:10 PM(UTC)
Currently I have a data-driven unit test with about 30 InlineData attributes. When I make a change that breaks some number of the tests, the NCrunch Tests window shows the test as failing, but it gives no indication of the data-driven breakdown. IOW, was it 1 failure out of 30? 30 out of 30? Something else?

Not knowing the particular number also makes it more difficult to tell whether I'm making things better or worse when I make more changes.

Is there some option/setting I'm missing to get more info about the particular 'subtests' for data-driven xunit tests (the TheoryAttribute)?
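For context, the shape of the test in question is roughly this (illustrative names and data, not the actual code from my repo):

using Xunit;

public class DataDrivenTests
{
    // One [Theory] carrying many [InlineData] rows; each row runs as a 'subtest'
    [Theory]
    [InlineData(1, 1, 2)]
    [InlineData(2, 3, 5)]
    [InlineData(10, -4, 6)]
    // ... about 30 rows in the real test
    public void Add_ReturnsSum(int a, int b, int expected)
    {
        Assert.Equal(expected, a + b);
    }
}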

On a related note, the output pane of the NCrunch Tests window lists a line for each 'subtest', showing 'Failures' after each one that failed, but with 30 subtests it's a little difficult to deal with, since I have to scroll through the list each time to find the failures. This is with the 'show passing tests' option unchecked, so if the success cases didn't show up in that output when 'show passing tests' is turned off, the output pane would be much easier to deal with, as it would then only show the failures.

Thanks!
Remco
#2 Posted : Monday, May 11, 2015 9:38:51 PM(UTC)
Hi James,

Thanks for sharing this. Could you share some sample code demonstrating the problem? Sorry, I'm having trouble understanding/reproducing it.
jamesmanning
#3 Posted : Monday, May 11, 2015 9:44:35 PM(UTC)
Good point - I'll put together a quick sample now. :)

The original source where I was hitting this is in the repo linked below, if you want to clone it and see for yourself, but I'll also make a quick solution showing just the issue.

https://github.com/james...yEqualTests.cs#L68-L128

Thanks!
Remco
#4 Posted : Monday, May 11, 2015 9:52:35 PM(UTC)
Thanks for this. Just to clarify - when NCrunch detects this test, does it give you a single test for each set of InlineData, or just one test for the whole theory? (as visible in the Tests Window)
jamesmanning
#5 Posted : Monday, May 11, 2015 9:58:21 PM(UTC)
Just the one Theory test - I broke 2 of the subtests here, but nothing in the output (short of scrolling through the text output) indicates that 2 of them failed, AFAICT.

[Screenshot: NCrunch Tests window showing a single failure for the data-driven test]
jamesmanning
#6 Posted : Monday, May 11, 2015 10:11:41 PM(UTC)
So it looks like the output in the main pane was due to my 'pinning' one of the failed (sub)tests at some point, which explains why the output includes every subtest run (although it'd still be nice to see only the subtest failures).

I've uploaded a simple repro case to https://github.com/james...data-driven/SomeProject

The meaningful file being https://github.com/james...ject/DataDrivenTests.cs
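That file is essentially a theory where a deliberate bug breaks a couple of the rows - something along these lines (a sketch of the same shape, not the exact file contents):

using Xunit;

public class DataDrivenTests
{
    [Theory]
    [InlineData(0, "zero")]
    [InlineData(5, "positive")]
    [InlineData(-5, "negative")]   // fails with the bug below
    [InlineData(-1, "negative")]   // fails with the bug below
    public void Classify_ReturnsExpectedLabel(int input, string expected)
    {
        Assert.Equal(expected, Classify(input));
    }

    private static string Classify(int n)
    {
        // deliberate bug: negative numbers are misclassified, so 2 of the 4 rows fail
        return n == 0 ? "zero" : "positive";
    }
}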

With this repro case and no pinned tests, the NCrunch Tests window still doesn't reflect the number of 'subtest' failures, though, AFAICT?

[Screenshot: NCrunch Tests window showing no subtest failure count]
Remco
#7 Posted : Monday, May 11, 2015 10:29:08 PM(UTC)
Ok, this is all making sense now.

Broadly, there are two ways to represent a theory:

1. By splitting out each test case of the theory as an independent test, resulting in many 'tests'.
2. By treating the entire theory as one 'test' and rolling all of the results up into it.

When working with Xunit v1, NCrunch uses the second method, because of the way the Xunit adapter inside Gallio works.
When working with Xunit v2, the first method is normally used, although in some scenarios Xunit overrides this behaviour and falls back to the second method (for example, when the test-case parameters contain custom classes that can't be used to construct the test name under static analysis).

In this situation, you're compiling against Xunit v1.9.2, so the results are as expected. I recommend upgrading to Xunit v2 if you'd like to see better granularity in your test case reporting.
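For example (hypothetical types - this sketches the fallback scenario, not NCrunch internals): a theory whose cases are built from instances of a custom class can't have per-case names generated under static analysis, so the whole theory may be rolled up into one test:

using System.Collections.Generic;
using Xunit;

public class CustomInput
{
    public string Value { get; set; }
}

public class FallbackTests
{
    public static IEnumerable<object[]> Inputs()
    {
        // Custom class instances can't be turned into stable, per-case test names
        yield return new object[] { new CustomInput { Value = "a" } };
        yield return new object[] { new CustomInput { Value = "b" } };
    }

    [Theory]
    [MemberData("Inputs")]
    public void HandlesInput(CustomInput input)
    {
        Assert.NotNull(input.Value);
    }
}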
jamesmanning
#8 Posted : Monday, May 11, 2015 10:42:18 PM(UTC)
Oh, wow - I didn't even consider it being tied to the xunit version! I remember I had to downgrade from 2.0 to 1.9.2 for some other reason (unrelated to NCrunch), but now that I know this is fixed with xunit 2, I'm definitely going to sort out that other issue so I can get the granular results! :)
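(For anyone else hitting this: assuming the project uses NuGet, the upgrade itself is just a package change in the Package Manager Console, along the lines of:

Install-Package xunit -Version 2.0.0

with whatever prompted the earlier downgrade still to be sorted out separately, of course.)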

Thanks!!