Displaying parameterised tests
Magbr
#1 Posted : Monday, January 25, 2016 11:59:51 PM(UTC)
Rank: Member

Groups: Registered
Joined: 1/25/2016(UTC)
Posts: 29

Thanks: 5 times
Was thanked: 1 time(s) in 1 post(s)
Hi, how can I display parameterised xUnit tests as separate tests in the Tests Window, like they're displayed in ReSharper, for example?
Remco
#2 Posted : Tuesday, January 26, 2016 3:05:10 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Hi, thanks for posting!

When working with Xunit v1, NCrunch integrates via Gallio, which automatically bundles parameterised tests into single tests so it can make sense of them.

From Xunit v2, NCrunch supports separation of parameterised tests implicitly. So if you upgrade to Xunit v2, you'll get them :)
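
Here's a minimal sketch of what I mean (the class and test names are just illustrative) - a plain Xunit v2 theory like this will appear in the Tests Window as one test per InlineData row:

Code:

        using Xunit;

        public class CalculatorTests
        {
            [Theory]
            [InlineData(1, 2, 3)] // shows up as its own test in the Tests Window
            [InlineData(2, 3, 5)] // ... and so does this row
            public void Add_ReturnsSum(int a, int b, int expected)
            {
                Assert.Equal(expected, a + b);
            }
        }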
Magbr
#3 Posted : Tuesday, January 26, 2016 7:07:32 PM(UTC)
Rank: Member

Groups: Registered
Joined: 1/25/2016(UTC)
Posts: 29

Thanks: 5 times
Was thanked: 1 time(s) in 1 post(s)
Hi, I am already using Xunit 2.1.0 and NCrunch 2.19.0.4! I'm using AutoFixture's InlineAutoData attribute, if that plays a role.
Remco
#4 Posted : Tuesday, January 26, 2016 9:43:50 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Magbr;8270 wrote:
Hi, I am already using Xunit 2.1.0 and NCrunch 2.19.0.4! I'm using AutoFixture's InlineAutoData attribute, if that plays a role.


Sorry, can you clarify a bit further how you're expecting the tests to be displayed?

Are you expecting an extra grouping at method level?
Magbr
#5 Posted : Tuesday, January 26, 2016 10:25:12 PM(UTC)
Rank: Member

Groups: Registered
Joined: 1/25/2016(UTC)
Posts: 29

Thanks: 5 times
Was thanked: 1 time(s) in 1 post(s)
Remco
#6 Posted : Tuesday, January 26, 2016 11:02:30 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Thanks for clarifying!

Can you confirm that you have your 'Framework utilisation type for Xunit' set to DynamicAnalysis?

In most cases, NCrunch should expand theories into their individual test cases. I'm not sure at the moment why that isn't happening here - if you can share any sample code, I can look a bit deeper.
Magbr
#7 Posted : Tuesday, January 26, 2016 11:06:55 PM(UTC)
Rank: Member

Groups: Registered
Joined: 1/25/2016(UTC)
Posts: 29

Thanks: 5 times
Was thanked: 1 time(s) in 1 post(s)
Yeah, DynamicAnalysis for xUnit v2. If you want some log files or anything, feel free to ask!

Code:

        using System.Net;
        using System.Net.Http;
        using System.Threading.Tasks;
        using System.Web.Http;
        using Ploeh.AutoFixture.Xunit2;
        using Xunit;

        [Theory]
        [InlineAutoData(HttpStatusCode.Conflict, "test1", "test1")] // all three parameters supplied inline
        [InlineAutoData(HttpStatusCode.OK)] // username and email generated by AutoFixture
        public async Task UserCreationTest(HttpStatusCode expected, string username, string email)
        {
            var sut = new StorageController();
            sut.Request = new HttpRequestMessage();
            sut.Configuration = new HttpConfiguration();
            var reg = new RegistrationRequest()
            {
                Username = username,
                Email = email,
                PhoneNumber = "00113344"
            };
            var result = await sut.CreateUser(reg);
            Assert.Equal(expected, result.StatusCode);
        }
Remco
#8 Posted : Tuesday, January 26, 2016 11:25:23 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Thanks for sharing this. It looks like the issue stems from an integration problem with AutoFixture. AutoFixture appears to change the behaviour of Xunit somehow in this situation, and Xunit/NCrunch is falling back to a method that ensures the test can still be run.

I'll need to look into this in more detail. Thanks for bringing it to my attention.
Remco
#9 Posted : Thursday, March 3, 2016 4:42:02 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
I've completed my investigation of this problem and concluded that it is, unfortunately, by design.

Xunit v2 has a concept known as pre-enumeration, in which the individual data rows of a theory are enumerated granularly during the discovery phase, before anything is executed. Pre-enumeration allows NCrunch to detect theory test cases well in advance of their execution, in turn allowing them to be shown in the Tests Window, split between test tasks, run in parallel, distributed, etc.

AutoFixture specifically disables pre-enumeration under Xunit v2, for good reason. The data AutoFixture inserts into test case parameters is not 'stable'; for example, it may contain GUIDs and other values that differ from run to run. When tests are discovered prior to execution, they must have consistent parameters that can be used to uniquely identify them, so that the data from their later execution runs can be tied back to the test. Simply speaking, theory test cases using AutoFixture cannot have their data correlated between discovery and execution.
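
To illustrate the difference (a hypothetical sketch, not code from this thread), compare a theory with fixed inline data against one whose data AutoFixture regenerates on every enumeration:

Code:

        using System;
        using Ploeh.AutoFixture.Xunit2;
        using Xunit;

        public class StabilityExample
        {
            // Stable: the arguments are identical at discovery and execution,
            // so a case discovered earlier can be correlated with its result.
            [Theory]
            [InlineData("fixed-value")]
            public void StableCase(string input)
            {
                Assert.NotNull(input);
            }

            // Unstable: AutoFixture generates a fresh Guid and string each time
            // the data is enumerated, so a case seen at discovery time can never
            // be uniquely matched to the one seen at execution time.
            [Theory]
            [AutoData]
            public void UnstableCase(Guid id, string name)
            {
                Assert.NotEqual(Guid.Empty, id);
                Assert.NotNull(name);
            }
        }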

The reason this problem is visible under NCrunch but not under other runners is that NCrunch handles the lifespan of a test very differently. If you run an AutoFixture theory using the Xunit v2 console runner, you'll find that it still reports the individual test cases in the execution output. For a console runner, it is a simple thing to just report the test run as it happens, as there is no requirement for the reported tests to exactly match what may have been discovered earlier.

Other test runners likely just generate new test cases dynamically to hold the results as they are encountered during execution. Unfortunately, such a solution is fundamentally incompatible with a number of NCrunch's features, such as parallel execution, dynamic execution order and distributed processing. NCrunch relies on having a 'strong' model in which each test is unique and has its data correlated across all execution runs. The idea of a 'dynamic' test case that could have a different signature on each execution run is fundamentally incompatible with these concepts as they are implemented in the NCrunch engine.

A possible solution would be to introduce a 'cosmetic' approach of bundling up the results inside sub-tests so that they can be reported under the main theory inside the Tests Window. While this might seem simple at first glance, there is huge hidden complexity here - for example, the individual sub-tests would need to have their own uniquely reported code coverage and performance data. It wouldn't be possible to run them individually outside of their main theory (as they cannot be uniquely identified), so many of the UI commands for them simply wouldn't work. They couldn't be ignored using NCrunch's 'ignored tests' mechanism, as there is no reliable way to distinguish them from their siblings. Basically, it would be a horrible nightmare that would create a confusing and complex result, and simply isn't worth pursuing for such an edge case.

So I'm sorry, but this feature cannot be supported by NCrunch.