Inconclusive result in NUnit TestCaseSource
trailmax
#1 Posted : Wednesday, December 11, 2013 2:16:18 PM(UTC)
Rank: Newbie

Groups: Registered
Joined: 12/11/2013(UTC)
Posts: 3
Location: United Kingdom

Hi

I'm implementing the Autofixture AutoData attribute for NUnit (http://gertjvr.wordpress.com/), so I had to switch NCrunch from DynamicAnalysis to StaticAnalysis in the solution-level configuration.

And one of the tests turns red with "result inconclusive". The test was using NUnit TestCaseSource.
It took me a while to figure out the problem: it was the .SetName() method call. See the failing test cases:


// Switch NCrunch into Static Analysis for NUnit

public class AutoDataExample
{
    private static IEnumerable<TestCaseData> Inconclusive()
    {
        yield return new TestCaseData(true).Returns(true)
            .SetName("inconclusive"); // <<-- This causes problems
    }

    [TestCaseSource(typeof(AutoDataExample), "Inconclusive")]
    public bool Inconclusiveness(bool boolean)
    {
        // This test becomes inconclusive in NCrunch
        return boolean;
    }

    private static IEnumerable<TestCaseData> Conclusive()
    {
        yield return new TestCaseData(true).Returns(true);
    }

    [TestCaseSource(typeof(AutoDataExample), "Conclusive")]
    public bool Conclusiveness(bool boolean)
    {
        // This test works fine because there is no call to .SetName()
        return boolean;
    }
}


I think this is a bug. The same behavior occurs in the latest 1.x version and in the 2.1 beta.

Thanks,
Max
Remco
#2 Posted : Wednesday, December 11, 2013 10:25:50 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 6,976

Thanks: 931 times
Was thanked: 1257 time(s) in 1170 post(s)
Hi Max -

Thanks for sharing this issue. NCrunch's static analysis doesn't work with NUnit TestCaseSource, as this attribute requires execution of user code in a runtime domain in order to resolve tests.

TestCaseSource was actually the main motivation for introducing Dynamic Analysis, as there was simply no way to make Static Analysis work with it.

Can you share any details about why you needed to set the engine to StaticAnalysis for Autofixture.Autodata? Perhaps there is something I can do to improve support here, or provide you with a different workaround.


Cheers,

Remco
trailmax
#3 Posted : Wednesday, December 11, 2013 10:49:28 PM(UTC)
Rank: Newbie

Groups: Registered
Joined: 12/11/2013(UTC)
Posts: 3
Location: United Kingdom

Remco,

I've been following the instructions from here: http://gertjvr.wordpress...2-working-with-ncrunch/
With DynamicAnalysis, tests with the AutoData attribute are not running:

class MyTest
{
    [Test, AutoData]
    public void SimpleParameters_AreRandom(int random, String guid)
    {
        // Red here, with an exception in the NCrunch window
        Assert.AreNotEqual(0, random);
        Assert.IsNotNullOrEmpty(guid);
    }
}

The message I see in NCrunch (v2.1) is "The parent fixture of this test experienced a failure during test execution". There is nothing else to the test class, no other set up or any fixture parts.


Funnily enough, this problem only occurs when one of the parameters is a primitive type. If it is a class, NCrunch is fine with it:

class MyOtherTestClass
{
    [Test, AutoData]
    public void GivenClass_NCrunchWorks(Product product)
    {
        // This one works fine
        Assert.IsNotNull(product);
    }

    [Test, AutoData]
    public void PrimitiveTypes_CauseException(Product product, int number)
    {
        // And this one is red
        Assert.AreNotEqual(0, number);
    }
}

class Product
{
    public String Name { get; set; }
}


Seems like I've found a lot of edge cases that nobody apart from us runs into :-)

I can create a small solution with reproduction if that can help you.
Remco
#4 Posted : Thursday, December 12, 2013 12:30:40 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 6,976

Thanks: 931 times
Was thanked: 1257 time(s) in 1170 post(s)
Thanks Max, I've managed to reproduce the issue based on your description and the code you've provided.

The AutoFixture.AutoData feature works by randomising the values passed into the test methods. These values are reported by NUnit when NCrunch interrogates it during the dynamic analysis step. They make up the name of the test and are therefore critical for identifying the test so that it can later be executed.

When NCrunch later calls back into NUnit to run the tests, the names are randomised again by NUnit/AutoData and no longer match the names recorded during NCrunch's analysis step. The random parameter data provided by AutoData therefore causes NCrunch's dynamic analysis to suffer from the same issues as the NUnit Random attribute.

I'm afraid I can't find any feasible way to make this work using dynamic analysis. The correct answer is to use static analysis, which does unfortunately rule out the use of the TestCaseSource attribute. Sorry, but it looks like this is a structural limitation that I cannot provide a technical solution for.
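For readers hitting the same wall: one way to stay compatible with static analysis is to replace the TestCaseSource member with inline TestCase attributes, which can be resolved without executing user code. This is a sketch, not from the thread; note that in older NUnit 2.x versions the ExpectedResult property was named Result.

```csharp
using NUnit.Framework;

[TestFixture]
public class StaticFriendlyExample
{
    // Inline cases are visible to static analysis because no user code
    // needs to run in order to enumerate them.
    [TestCase(true, ExpectedResult = true)]
    [TestCase(false, ExpectedResult = false)]
    public bool Conclusiveness(bool boolean)
    {
        return boolean;
    }
}
```

This only works when the case data can be written as compile-time constants, so it is not a substitute for a genuinely dynamic source.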


Cheers,

Remco
trailmax
#5 Posted : Thursday, December 12, 2013 12:57:22 AM(UTC)
Rank: Newbie

Groups: Registered
Joined: 12/11/2013(UTC)
Posts: 3
Location: United Kingdom

Thanks for the analysis, Remco.
Nothing critical there. We'll use static analysis and work around the TestCase problem. It seems AutoData is not well suited for NUnit: ReSharper 8.1 has problems with these tests as well :-(

I'll see how I can migrate all our tests to xUnit - it seems better suited for the purpose.
Ralf Koban
#6 Posted : Saturday, December 6, 2014 5:10:17 PM(UTC)
Rank: Advanced Member

Groups: Registered
Joined: 5/19/2014(UTC)
Posts: 44
Location: Germany

Thanks: 4 times
Was thanked: 10 time(s) in 9 post(s)
Hi Remco,

today I experienced a related issue when using NUnit's ValueSource attribute. I wrote you a report (but I'm not sure whether it got through, as the form stated that it could not be submitted).
The issue I faced was using some dynamic data (DateTime.Now in my case) as a value source.

NCrunch then reported a "The parent fixture of this test experienced a failure during test execution" error.

It would be nice if NCrunch could give some hints on what could go wrong, such as using dynamic data in TestCaseSource or ValueSource, which changes the names of the tests so that NCrunch can no longer find them.
The benefit would be more concrete feedback on why it fails, making the problem easier to fix.
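For illustration, the problematic pattern might look like the following minimal sketch (the class and member names here are mine, not from the actual report):

```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ValueSourceExample
{
    // Each enumeration of this source yields a different value, so the
    // test name NUnit derives from it differs between NCrunch's analysis
    // pass and the later execution pass.
    private static IEnumerable<DateTime> Timestamps()
    {
        yield return DateTime.Now;
    }

    [Test]
    public void UsesDynamicData([ValueSource("Timestamps")] DateTime timestamp)
    {
        Assert.AreNotEqual(default(DateTime), timestamp);
    }
}
```

Replacing DateTime.Now with a fixed value in the source (and reading the clock inside the test body instead) keeps the generated test name stable across runs.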

Best regards,
Ralf
Remco
#7 Posted : Saturday, December 6, 2014 7:40:43 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 6,976

Thanks: 931 times
Was thanked: 1257 time(s) in 1170 post(s)
Hi Ralf,

Sorry for the trouble this problem gave you.

Unfortunately, outside of introducing 'magic' to deal with cases such as this, there is little NCrunch can do about them. When working with DynamicAnalysis, NCrunch relies on NUnit itself to specify the test name, which also forms the only way to uniquely identify the test. When a component of the test name includes an inconsistent element (such as a random number or the current time), the only knowledge NCrunch has about the situation is that NUnit is unable to later discover the test when it tries to execute.

I'll make a note to see if the error message itself can be revised to give people more information about how this problem can occur. At the moment the error is very ambiguous and suggests that it can only be caused by an internal problem.


Cheers,

Remco
CharliePoole
#8 Posted : Wednesday, January 28, 2015 1:55:21 AM(UTC)
Rank: Newbie

Groups: Registered
Joined: 1/28/2015(UTC)
Posts: 2
Location: United States of America

Was thanked: 1 time(s) in 1 post(s)
I noticed this while browsing here for another reason and thought I'd mention the following for whomever it may help - i.e. users or the developer of the runner.

In NUnit we do _not_ provide a unique test name. In fact, we designed it so that test names don't have to be unique. This causes problems with VS test explorer and other runners that assume a test name is unique.

There are three ways test names may not be unique within NUnit.

1. The same namespace, class name or method name appears in different assemblies that are loaded together. Not a problem for runners that deal with one assembly at a time.

2. Tests with parameters use the actual argument values in the test name. If the user creates duplicate cases, then there are duplicate tests.

3. Users may change the test name for parameterized cases, creating another possible source of duplicates.

NUnit does provide a UniqueName for runners that need it, but most do not because it doesn't look sufficiently user-friendly.
Remco
#9 Posted : Wednesday, January 28, 2015 2:43:37 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 6,976

Thanks: 931 times
Was thanked: 1257 time(s) in 1170 post(s)
Thanks Charlie. Last year NCrunch was updated to include decoupling between the test name and the test identifier (which needs to be unique). This allows it to deal with cases such as tests that exist in different projects but with the same namespace. At the time, I didn't see any way to apply this to tests under the same name using TestCaseSource, though from your post I assume that the NUnit UniqueName may hold the key to this. I'll take another look and will follow up with you when I understand more about how this works.

Cheers,

Remco
CharliePoole
#10 Posted : Wednesday, January 28, 2015 5:30:24 AM(UTC)
Rank: Newbie

Groups: Registered
Joined: 1/28/2015(UTC)
Posts: 2
Location: United States of America

Was thanked: 1 time(s) in 1 post(s)
Yes, TestCaseSource was the first feature that basically made it impractical for any runner to just "figure out" the tests by looking at the assembly. You really have to execute the code. Other features like that are Pairwise combinations of parameter values (because the generation is non-deterministic) and, to some extent, RandomAttribute. It's likely there will only be more such things, so we encourage everyone to rely on NUnit itself to find the tests. I think that's what you call "dynamic" discovery, and it's what we do ourselves in our VS adapter for Test Explorer.

The UniqueName of a test consists of the FullName prefixed with an expression like (0-123), where the first number identifies the assembly and the second is an arbitrary id for the test within that assembly. This is all encapsulated in the TestName of a test. NUnit 3.0 changes this substantially but still has a unique id for each test.
1 user thanked CharliePoole for this useful post.
Remco on 1/28/2015(UTC)
nrjohnstone
#11 Posted : Wednesday, July 1, 2015 7:56:11 PM(UTC)
Rank: Member

Groups: Registered
Joined: 7/1/2015(UTC)
Posts: 12
Location: New Zealand

Thanks: 1 times
Was thanked: 2 time(s) in 2 post(s)
Hi guys, I thought I'd mention that I've started using AutoFixture with the AutoData attribute to supply data to NUnit tests and was bumping into this same issue. However, adding the [Frozen] attribute to the parameters that were being randomised fixed the problem, as this locks the value in place for that type.

e.g.

[Test]
[AutoData]
public void UsingAutoDataAttribute([Frozen] int expectedNumber, EchoClass sut)
{
    // [Frozen] locks the generated int value in place for this test
}

Unfortunately this means that if you have multiple parameters of the same type, they will all get the same value, but it does solve the problem of the test name changing between the analysis and execute phases.

A better way might be to extend the AutoData attribute to take a string for the test name, and then set that name on whatever it returns to NUnit (if it uses TestCaseData, just call SetName()). That would remove the randomness from the auto-generated test name.
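The naming part of that idea could be sketched as a small helper that stamps a deterministic name onto each generated case before it reaches NUnit. This is a sketch only - WithStableNames is a made-up extension method, not part of AutoFixture or NUnit:

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public static class TestCaseNaming
{
    // Replaces the value-derived name (which changes on every run when the
    // arguments are random) with a deterministic "<baseName>_<index>" name,
    // so the identity seen at analysis time matches the one at run time.
    public static IEnumerable<TestCaseData> WithStableNames(
        this IEnumerable<TestCaseData> cases, string baseName)
    {
        return cases.Select((c, i) => c.SetName(baseName + "_" + i));
    }
}
```

A TestCaseSource member could then end with something like `return GenerateCases().WithStableNames("MyTest");` so that the runner sees the same names during discovery and execution.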

1 user thanked nrjohnstone for this useful post.
Remco on 7/1/2015(UTC)