Only one of multiple TestCaseSource attributes respected
kalebpederson
#1 Posted : Thursday, June 27, 2013 1:41:45 AM(UTC)
Hi, just a quick bug report.

If an NUnit test contains multiple TestCaseSource attributes, only one of them is executed. For example:

Code:
[Test]
[TestCaseSource("DataSource1")]
[TestCaseSource("DataSource2")]
public void Children_property_is_empty(Variable variable)
{
	Assert.That(variable.Children, Is.Empty);
}


Thanks again for NCrunch!

--Kaleb
Remco
#2 Posted : Thursday, June 27, 2013 2:51:52 AM(UTC)
Hi Kaleb -

Thanks for sharing this. I haven't been able to reproduce this issue myself. Do the test case data sources produce tests with the same name? If so, NCrunch will only interpret results from the first one.

The code I used to test this is below. Does this code produce the same result for you?

Code:
using System;
using System.Collections;
using NUnit.Framework;

public class Class1
{
    [Test]
    [TestCaseSource("source1")]
    [TestCaseSource("source2")]
    public void TestCaseSourceTest(int value)
    {
        Console.Write(value);
    }

    private static IEnumerable source1
    {
        get
        {
            return new object[] {
                new TestCaseData(1).SetName("First"),
                new TestCaseData(2).SetName("Second"),
                new TestCaseData(3).SetName("Third"),
            };
        }
    }

    private static IEnumerable source2
    {
        get
        {
            return new object[] {
                new TestCaseData(4).SetName("Fourth"),
                new TestCaseData(5).SetName("Fifth"),
            };
        }
    }
}
kalebpederson
#3 Posted : Thursday, June 27, 2013 5:11:47 AM(UTC)
Your description is very close. I'm not explicitly creating the TestCaseData as you describe; instead, NC seems to take the name implicitly from the ToString() of the object:

Code:

using System.Collections.Generic;
using NUnit.Framework;

public class Data
{
    public int Value { get; set; }

    public Data(int value)
    {
        Value = value;
    }

    public override string ToString()
    {
        // Note: Value % 2 means the four Data objects below produce only
        // two distinct names ("Data=0" and "Data=1").
        return string.Format("Data={0}", Value % 2);
    }
}

public class Class1
{
    [Test]
    [TestCaseSource("GetSource1")]
    public void TestCaseSourceTest(Data value)
    {
        System.Console.WriteLine(value);
        Assert.False(true);
    }

    private static IEnumerable<Data> GetSource1()
    {
        return new[] { new Data(0), new Data(1), new Data(2), new Data(3) };
    }
}


It appears the test runs the correct number of times, but the results simply aren't all displayed. Thanks.

--Kaleb
Remco
#4 Posted : Thursday, June 27, 2013 6:55:58 AM(UTC)
Thanks - this confirms the issue is related to the way that NCrunch uniquely identifies tests.

Where two tests exist with the same name, NCrunch will simply drop the second test. The rationale behind this is that two tests with the same name can never be uniquely identified by the user, so they don't really make much sense. There is a huge amount of logic in NCrunch tied to this, so I can't say I ever expect it to change in the future. I recommend making sure that tests always have unique names. TestCaseSource does allow you to do this using the .SetName method, as demonstrated above.
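
For instance, a minimal sketch of that approach applied to the Data class posted earlier in this thread (the exact name format is just an assumption; only TestCaseData and .SetName come from the example above):

Code:
using System.Collections;
using System.Linq;
using NUnit.Framework;

public class UniquelyNamedCases
{
    [Test]
    [TestCaseSource("GetSource1")]
    public void TestCaseSourceTest(Data value)
    {
        Assert.That(value, Is.Not.Null);
    }

    private static IEnumerable GetSource1()
    {
        // Each case gets an explicit, deterministic name derived from the full
        // Value, so four Data objects yield four distinct test names.
        return new[] { new Data(0), new Data(1), new Data(2), new Data(3) }
            .Select(d => new TestCaseData(d).SetName("TestCaseSourceTest_Data" + d.Value));
    }
}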

Cheers,

Remco
kalebpederson
#5 Posted : Thursday, June 27, 2013 1:28:20 PM(UTC)
I'd personally prefer to see behavior consistent with the NUnit test runner, which runs all the tests and simply displays the duplicates. ReSharper's test runner does the same.

I'm probably going to change my ToString() method so I can distinguish between them, so do whatever you consider to be of most value.

Maybe it would be beneficial to change the test icon to a warning icon and indicate that there were multiple indistinguishably named tests.

Thanks again.

--Kaleb
kalebpederson
#6 Posted : Thursday, June 27, 2013 1:54:27 PM(UTC)
Rank: Member

Groups: Registered
Joined: 2/1/2012(UTC)
Posts: 25
Location: US

Thanks: 5 times
Was thanked: 3 time(s) in 3 post(s)
I just improved my ToString() methods, only to find that NC now considers the old tests failures and gives the following message:

Quote:
This test was not executed during a planned execution run. Ensure your test project is stable and does not contain issues in initialisation/teardown fixtures.


I imagine that's because the old tests no longer exist, but NC can't tell that they're gone. I had to manually resynchronize to eliminate the failing tests, but it got worse from there. One of my variable types contains a DateTime value that makes sense to include in its ToString() display, but after running the test the first time I can no longer run or debug it. Attempting to run or debug it just returns immediately. This will be quite problematic.

[Attached screenshot: sample of test names]

--Kaleb
Remco
#7 Posted : Friday, June 28, 2013 1:52:49 AM(UTC)
Is this DateTime value consistent between test runs? If it isn't, it will certainly cause problems.

NCrunch relies on tests having consistent names. This is because the tests need to exist and have their data correlated across multiple execution runs. This doesn't really work in situations where a test is given a different name every time it is discovered, as it creates an inconsistency between discovery and execution: NCrunch discovers the test under one name, but when it later goes to execute it, the user code generates a different name, so the test cannot be identified during the execution run. There are similar problems with the NUnit Random attribute. Unfortunately there isn't really an ideal solution to this with the design of NCrunch - test names need to be consistent and unique in order for it to work correctly.
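
For example, here is a minimal sketch of keeping a timestamp in the test data while keeping it out of the generated name (the TimestampedData type is hypothetical, not from this thread):

Code:
using System;

public class TimestampedData
{
    public int Id { get; private set; }
    public DateTime CreatedAt { get; private set; }

    public TimestampedData(int id)
    {
        Id = id;
        CreatedAt = DateTime.Now;   // may vary per run, but never appears in the name
    }

    public override string ToString()
    {
        // Only the stable Id feeds into the test name NUnit derives from the data.
        return string.Format("TimestampedData(Id={0})", Id);
    }
}

The DateTime stays available to the test body; only the stable identifier participates in the name.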
kalebpederson
#8 Posted : Friday, June 28, 2013 4:23:27 AM(UTC)
Remco;4305 wrote:
Is this DateTime value consistent between test runs? If it isn't, it will certainly cause problems.

...Unfortunately there isn't really an ideal solution to this with the design of NCrunch - test names need to be consistent and unique in order for it to work correctly.


No, the DateTime values aren't consistent across test runs, although in this case there's no reason they couldn't be. When possible, I like the extra bit of randomness that the current time gives, but it sounds like I need to make it standard practice to use the test data objects and provide test names as you illustrated.

Thanks for the feedback.

--Kaleb
Remco
#9 Posted : Friday, June 28, 2013 7:34:18 AM(UTC)
Thanks Kaleb. I've met people within the industry with strong opinions on tests always having consistent and deterministic behaviour. There are merits to this approach, but ultimately I feel you should do what works for you. Randomness within the scope of a test should be fine - you might notice the code coverage moving around as the different data patterns create a different flow through the program, but as long as the test names are consistent, all should be well.


Cheers,

Remco
kalebpederson
#10 Posted : Friday, June 28, 2013 12:50:49 PM(UTC)
Remco;4309 wrote:
Thanks Kaleb. I've met people within the industry with strong opinions on tests always having consistent and deterministic behaviour.


For unit tests, I agree with this. I actually use the current time as an indicator that the date/time doesn't matter and doesn't change the flow of execution. That is, in essence, what I was implying when I said "when possible." In all other cases I'll stub the value to something appropriate.

Although I had realized that my objects were being ToString'd and that the result was used in the display of the tests, I had never considered the test data as part of the name. That's probably an important point, and it leads to crafting test data more carefully rather than throwing a bunch of may-be-important pieces of data at a test.

Thanks for the thoughts, and NCrunch!

--Kaleb
DanielM
#11 Posted : Monday, February 13, 2017 9:57:23 PM(UTC)
I've read the posts both on this forum and on NUnit's forum. In my case, I have multiple test fixtures that inherit from the same base test fixture. Each derived fixture uses [TestFixtureSource] pointing at the same test data source. The test data source is a set of long strings (fragments of a text file), many of which are identical for the first 'x' characters. I decided to 'cheat' and implemented a workaround. My solution (which seems to work so far) is an extension method that prepends a GUID followed by a token (something like "</UNIQUE>"), so each test case data value starts with a unique prefix. Then, in the test fixture constructor, I strip the "xxx-xxx-xxx-xxxx</UNIQUE>" prefix (if present) from the test case data and use only the actual value for the test to run on.

...to prefix the GUID to the test data I use

return $"{Guid.NewGuid().ToString()}</UNIQUE>{testCaseData}";

...then in my constructor I do

public MyTestFixtureConstructor(string testCaseData)
{
    // _testData is a field holding the value the tests actually run against
    if (!testCaseData.Contains("</UNIQUE>"))
        _testData = testCaseData;
    else
        _testData = testCaseData.Split(new[] { "</UNIQUE>" }, StringSplitOptions.None)[1];
}

My questions are:
1) Is this something that NCrunch could possibly do behind the scenes, so we don't have to? That would resolve this issue for everyone.
2) I understand that prefixing n characters could push an already very long string over a length limit. I'm not sure what to do here... maybe a configuration setting would allow the user to turn off adding a unique test case data prefix.
3) The solution I have right now generates a new GUID for each test run. This seems to fix the issue (all tests are run and no duplicates are identified), but are there other NCrunch-related side effects of this solution, such as issues with NCrunch tracking tests accurately?

Thanks for the thoughts and the amazing tool!
Daniel
Remco
#12 Posted : Monday, February 13, 2017 11:30:17 PM(UTC)
Hi Daniel, thanks for posting!

When creating tests dynamically using TestCaseSource (or TestFixtureSource), there are a couple of rules you must consider:

1. Each test must have a consistent name (randomly generated is not an option)
2. Each test must have a unique name (no more than one test with the same visible name)

These rules exist because of technical constraints caused by the way that test frameworks work. Breaking them will result in exceptions being thrown and/or some very strange, erratic behaviour appearing.

To understand this problem, imagine you are writing the code that calls into the test runner to tell it which test to run. If the test doesn't have a consistent name between discovery runs, how would you tell the runner which test to execute? If there are multiple tests with the same name, how do you know which results fit with each test?

Serial test runners don't really have this problem because they are able to interrogate a test assembly for tests and execute those tests within the same process/run. They are therefore able to use the memory addresses/objects generated during discovery to tell the tests apart. This doesn't work if you discover the tests using one process and then try to execute them with another. I expect you'll find that your current solution will work within a limited scope under NCrunch, because NCrunch will still re-use the test discovery process to execute tests; but as soon as a new process needs to be created (for example, for parallel execution, manual test execution, or a new session), NCrunch won't be able to identify the tests to run and you'll see strange results.
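
To make the two rules concrete, here is a minimal sketch (not the fixtures discussed in this thread; the "case-NN" labels are an assumption) of a TestFixtureSource whose arguments already differ in their leading characters, so the fixture names NUnit derives from them are both unique and stable:

Code:
using System.Collections;
using NUnit.Framework;

public class FixtureSources
{
    public static IEnumerable Fragments
    {
        get
        {
            // The distinguishing label sits at the start of each argument, so the
            // derived fixture names differ even if the fragments share a long common
            // prefix. The labels never change between runs (rule 1) and never repeat
            // (rule 2).
            yield return new TestFixtureData("case-01: first text fragment");
            yield return new TestFixtureData("case-02: second text fragment");
        }
    }
}

[TestFixtureSource(typeof(FixtureSources), "Fragments")]
public class FragmentTests
{
    private readonly string _fragment;

    public FragmentTests(string fragment)
    {
        _fragment = fragment;
    }

    [Test]
    public void Fragment_is_not_empty()
    {
        Assert.That(_fragment, Is.Not.Empty);
    }
}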
DanielM
#13 Posted : Thursday, February 16, 2017 7:59:21 PM(UTC)
Thanks, that clears things up... it was a bit of a pain, but I ended up getting a solution implemented. I modified the same extension method I posted above to take a callerName (a string of some sort, such as the test fixture name or the test case source name, that differs between test fixtures) and an id. For each fixture type I'll pass in a unique starting id (0, 100, 1000, etc.) and then add one for each test case data source defined. This way the values are consistent (until test cases are added or removed). At least this prevents me from having to wrap my data in a TestCaseData object and call .SetName() for every case (and we have a few thousand at this point... with the possibility of many more).

private static string AddUniqueValueFromTestDataString(this string testData, string Id = "", string callerName = "")
{
    return $"{callerName}:{Id}</UNIQUE>{testData}";
}


My other idea was to get the fully qualified name of the test method and hash that value (instead of having to pass in a callerName), but for a TestFixtureSource the NUnit TestContext hasn't been fully initialized yet.
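
A sketch of that direction that avoids TestContext altogether, by hashing the fragment content itself rather than the test method name (an assumption on my part, not code from the thread): identical fragments would still collide, but fragments that merely share a long common prefix get distinct, stable prefixes.

Code:
using System;
using System.Security.Cryptography;
using System.Text;

internal static class TestDataNaming
{
    // Hypothetical helper: derive the prefix from a stable hash of the fragment
    // content. The prefix is deterministic across runs and unique for distinct
    // fragments, so no caller name or manually assigned id is needed.
    public static string AddStableUniquePrefix(this string testData)
    {
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(testData));
            // Eight hex characters are normally enough to tell fragments apart.
            string prefix = BitConverter.ToString(hash, 0, 4).Replace("-", "");
            return prefix + "</UNIQUE>" + testData;
        }
    }
}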