Unstable Test Generation - NUnit TestCaseSource - Any alternatives
Liaan
#1 Posted : Tuesday, April 2, 2019 12:20:14 PM(UTC)
Rank: Newbie

Groups: Registered
Joined: 4/2/2019(UTC)
Posts: 1
Location: South Africa

I recently created a test harness that would run two versions of a calculation engine and compare the output.

To keep it as simple as possible:
- We have a NuGet package that represents the current production codebase
- We have the local projects with any modifications
- We use NUnit's TestCaseSource to create test cases based on either a view or a local file (roughly as in the sketch after this list)
- To ensure that each test case can be run separately via distributed processing, I use Redis as a caching mechanism
- The harness then runs each of those test cases, looking for any changes that are not expected
- These are integration tests using our database, but shims redirect any writes to the database back to the test so it can compare the output of the two versions
- At some point while the tests are running, I think NUnit becomes overwhelmed and the tests start failing with the unstable test generation error
- This only happens once I start using more than 12 processes
- If I just run the tests again, they pass
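For context, the dynamic source looks roughly like this. This is only a sketch; the class, method, and file names are illustrative rather than our actual harness code.

using System.Collections.Generic;
using System.IO;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class CalculationComparisonTests
{
    // NUnit evaluates this during test discovery, so if the backing file (or
    // view) returns different rows between discovery runs, the generated set
    // of tests changes as well.
    public static IEnumerable<TestCaseData> LoadCases()
    {
        return File.ReadLines("cases.txt")
                   .Select(line => new TestCaseData(line));
    }

    [TestCaseSource(nameof(LoadCases))]
    public void ProductionAndLocalEnginesAgree(string caseId)
    {
        // Hypothetical calls: run both engine versions for this case and
        // compare their output.
        // Assert.That(LocalEngine.Run(caseId), Is.EqualTo(ProductionEngine.Run(caseId)));
        Assert.Pass();
    }
}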

I have tried some hacks to see if they would change anything:
- Just creating a binary with the test case source
- Saving an enumerable in code and running that (see the sketch below)

They all still ended up failing in one way or another, most of the time with the unstable test generation error.
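By "saving an enumerable in code" I mean roughly the following (again only a sketch with made-up names): the cases are materialised once into a static list, so every enumeration of the source within a process returns the same collection. A fresh discovery run still re-reads the file, though, which is presumably part of why it did not help.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using NUnit.Framework;

public static class CachedCases
{
    // Read the case file once per process and keep the materialised list so
    // that repeated enumerations of the source return identical data.
    private static readonly Lazy<List<TestCaseData>> Cases =
        new Lazy<List<TestCaseData>>(() =>
            File.ReadLines("cases.txt")
                .Select(line => new TestCaseData(line))
                .ToList());

    public static IEnumerable<TestCaseData> All => Cases.Value;
}

[TestFixture]
public class CachedComparisonTests
{
    [TestCaseSource(typeof(CachedCases), nameof(CachedCases.All))]
    public void ProductionAndLocalEnginesAgree(string caseId)
    {
        Assert.Pass(); // placeholder for the real comparison
    }
}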

My question is whether anyone is aware of a similar mechanism for creating data-driven tests that is more stable. I am not married to NUnit at this point.
Remco
#2 Posted : Tuesday, April 2, 2019 11:32:53 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
The error you are encountering actually originates from NCrunch rather than NUnit. It is caused by your test case generation returning a different set of results even though your test project has seen no changes.

Previously, this would cause your test results to become skewed and unpredictable, because the positions of tests in the result set no longer matched their positions from the initial discovery run.

NCrunch is designed with the assumption that the same unchanged project will always return the same set of tests every time it is asked. Where this is not the case, we now detect it and report an error rather than produce malformed results.

Looking at your description of your design, I think that what you are trying to do is too complex given the constraints of the runner. Perhaps consider avoiding dynamic test generation and placing the test cases inside a static test instead.
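For example, something along these lines (just a sketch, with placeholder case ids): when the cases are hard-coded, every discovery run over the unchanged project produces exactly the same set of tests, which is what NCrunch expects.

using NUnit.Framework;

[TestFixture]
public class EngineComparisonTests
{
    // Hard-coded cases: discovery is deterministic for as long as the source
    // code itself does not change.
    [TestCase("case-001")]
    [TestCase("case-002")]
    [TestCase("case-003")]
    public void ProductionAndLocalEnginesAgree(string caseId)
    {
        // Load the data for this id inside the test body rather than during
        // discovery, then run the comparison (hypothetical calls).
        Assert.Pass();
    }
}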