Framework Utilisation Types missing Xunit v2+
Kirlu
#1 Posted : Thursday, June 8, 2017 7:55:44 AM(UTC)
Rank: Member

Groups: Registered
Joined: 6/8/2017(UTC)
Posts: 10
Location: Democratic Republic of the Congo

Was thanked: 2 time(s) in 2 post(s)
Hi,

Framework utilisation type for Xunit v2+ is missing from Test Frameworks configuration.
It seems that it now uses "UseStaticAnalysis".
Remco
#2 Posted : Thursday, June 8, 2017 10:25:02 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,161

Thanks: 964 times
Was thanked: 1296 time(s) in 1202 post(s)
Hi, thanks for posting.

NCrunch v3.9 included significant changes to the Xunit integration. I'll post the details here from the release notes:

Significantly restructured NCrunch's integration with Xunit v2. Previously, NCrunch would include the runtime
binaries for xunit in its own installation and introduce binding redirections to force user projects to always
use NCrunch's version of Xunit. This approach had several limitations, particularly around its inability to
provide proper compatibility across different versions of Xunit. Understandably, it also wasn't very popular
with the Xunit dev team.

Under the new approach, NCrunch is integrated only with Xunit's lightweight utility libraries, and relies on
Xunit's dynamic loading of runtime assemblies so that the version of Xunit being used for execution is always
the same version being referenced by the user test project. This has the following effects on NCrunch's
Xunit integration:
- Better cross version Xunit support
- Better forwards compatibility with future releases of Xunit
- Better handling of rare Xunit edge cases that may have previously failed under NCrunch, particularly in
heavily customised environments
- Improved interaction with other toolsets that use specific versions of Xunit (e.g. XBehave)

There are some trade-offs here:
- Older non-RTM versions of Xunit (e.g. alpha/beta releases) will now likely fail to work with NCrunch. Make
sure you upgrade if you're running a dev build of Xunit.
- Issues that are not specific to v2.1 of Xunit now have the potential to surface when running tests under
NCrunch. For example, Xunit v2.2 does not support .NET 4.5, so anyone running this version of .NET may experience
problems unless they downgrade their Xunit to v2.1. Locking NCrunch to a single version of Xunit had some
advantages in terms of very thorough testing for this version, where we now have the potential for new problems
to be discovered due to limited real-world testing of NCrunch on other Xunit versions. If you're concerned about
this, I recommend making sure you use Xunit v2.1 for the time being.
- Support for the Static Analysis framework utilisation type has been removed for Xunit. Unfortunately it just
wasn't feasible to make static analysis work with the new integration design. I doubt anyone will notice this
as static analysis has had problems for a long time under Xunit and was always disabled by default.



..........

Is there a reason you need the static analysis? At the moment I'm not aware of any use case that would work with static analysis and not dynamic analysis.
Kirlu
#3 Posted : Thursday, June 8, 2017 10:57:10 AM(UTC)
Rank: Member

Groups: Registered
Joined: 6/8/2017(UTC)
Posts: 10
Location: Democratic Republic of the Congo

Was thanked: 2 time(s) in 2 post(s)
Hi,

It seems that it now uses the "UseStaticAnalysis" setting, which makes NCrunch unable to run many Xunit 2 theories that use MemberData.
We get the same error in the previous version of NCrunch if we change "Framework utilisation type for Xunit v2+" to "UseStaticAnalysis".
Remco
#4 Posted : Thursday, June 8, 2017 11:27:50 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,161

Thanks: 964 times
Was thanked: 1296 time(s) in 1202 post(s)
The static analysis code itself has been physically removed. You should be able to confirm this by checking for the existence of the 'Analyse assembly' tasks in the NCrunch Processing Queue. The 'Analyse assembly' task is used to execute dynamic analysis.

Are you able to share with me the test declaration for the code that isn't executing properly under NCrunch? And the error you're receiving?
Kirlu
#5 Posted : Thursday, June 8, 2017 11:45:10 AM(UTC)
Rank: Member

Groups: Registered
Joined: 6/8/2017(UTC)
Posts: 10
Location: Democratic Republic of the Congo

Was thanked: 2 time(s) in 2 post(s)
Hi,

The following test code gives the error in NCrunch:
"This test was not executed during a planned execution run. Ensure your test project is stable and does not contain issues in initialisation/teardown fixtures."

NCrunch will not run the unit test, but will mark it as failed.

Code:

using System.Collections.Generic;
using AutoFixture;
using Xunit;

public class SomethingTests
{
    private static readonly Fixture Fixture = new Fixture();

    public static IEnumerable<object[]> SomeTestData()
    {
        yield return new object[] { Fixture.Create<long>() };
        yield return new object[] { Fixture.Create<int>() };
        yield return new object[] { Fixture.Create<string>() };
    }

    [Theory, MemberData(nameof(SomeTestData))]
    public void Something(object value)
    {
    }
}

For reference I am using AutoFixture for creating test data.
Remco
#6 Posted : Friday, June 9, 2017 12:07:50 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,161

Thanks: 964 times
Was thanked: 1296 time(s) in 1202 post(s)
Thanks for sharing the code sample.

This is being caused by a compatibility problem between NCrunch and AutoFixture. The code above uses AutoFixture to generate random data for the test parameters, so the physical name of each test case is completely random.

This creates a structural problem for NCrunch, because there is no way for it to correlate test results data between different instances of the test. Each time it invokes Xunit to construct the fixture, the names of the tests are different, so there is no way to know how to find the targeted tests or how to extract their results.

There is no way this can be solved in NCrunch. The only way to make this work is to change the structure of the tests. I recommend avoiding the use of random data in test parameters. If you need to use random elements, it's better to do this inside the tests themselves so there is no random behaviour during test discovery.
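
As a rough sketch of that restructuring (the class and member names here are illustrative, not from the thread), the sample from post #5 could generate its AutoFixture data inside the test body, so discovery never sees random parameter values:

```csharp
using AutoFixture;
using Xunit;

public class StableSomethingTests
{
    [Fact]
    public void Something()
    {
        // Random data is created inside the test body, so the
        // discovered test name never changes between discovery runs.
        var fixture = new Fixture();
        var longValue = fixture.Create<long>();
        var intValue = fixture.Create<int>();
        var stringValue = fixture.Create<string>();

        // Exercise the system under test with these values here;
        // AutoFixture's Create<string>() always returns a value.
        Assert.NotNull(stringValue);
    }
}
```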
robertengdahl
#7 Posted : Friday, June 9, 2017 6:34:03 AM(UTC)
Rank: Newbie

Groups: Registered
Joined: 6/9/2017(UTC)
Posts: 1
Location: Denmark

NCrunch 3.8.0.3 handles this test just fine.

NCrunch 3.8.0.3 with "Framework utilisation type for Xunit v2+" set to UseDynamicAnalysis.

The output of SomeTestData is not random (unless you evaluate it repeatedly). Fixture.Create<int>() will, for example, return 1 the first time it is invoked, 2 the second time, and so forth.
Remco
#8 Posted : Friday, June 9, 2017 11:24:38 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,161

Thanks: 964 times
Was thanked: 1296 time(s) in 1202 post(s)
v3.8 of NCrunch contained a defect where it was not correctly calling .ToString() on the contents of array parameters being used for test cases. This meant that the parameters you were passing in were actually being converted to a different value, which would have had the unintentional side-effect of making your code work. Because this defect has now been fixed, the tests themselves no longer have distinct names.

The randomness may be seeded per process, but for performance reasons, NCrunch will re-use the test process between test runs. Sorry, but the only way to resolve this problem in v3.9 of NCrunch and above is to redesign your tests.
Kirlu
#9 Posted : Tuesday, June 13, 2017 7:29:24 AM(UTC)
Rank: Member

Groups: Registered
Joined: 6/8/2017(UTC)
Posts: 10
Location: Democratic Republic of the Congo

Was thanked: 2 time(s) in 2 post(s)
Hi,

Am I to understand this as a recommendation against unstable test parameters in general?

If so, will you continue to support InlineAutoData and AutoData, or will they also be unsupported in some future version?


1 user thanked Kirlu for this useful post.
robert-j-engdahl on 6/13/2017(UTC)
Remco
#10 Posted : Tuesday, June 13, 2017 8:54:18 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,161

Thanks: 964 times
Was thanked: 1296 time(s) in 1202 post(s)
Kirlu;10594 wrote:

Am I to understand this as a recommendation against unstable test parameters in general?


That's correct. Because of the way these parameters work, there is no way they can be feasibly supported in NCrunch.

Kirlu;10594 wrote:

If so, will you continue to support InlineAutoData and AutoData, or will they also be unsupported in some future version?


Unfortunately not. The lack of support here is due to a technical limitation. There is simply no way to consistently identify a test that changes its shape every time it's loaded. Technically, these attributes were never really supported in prior releases, but in some cases they worked accidentally because of a test name derivation issue. Now that this issue has been fixed, I don't expect to see this kind of autogeneration working in any future release of NCrunch.
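
For anyone migrating away from these attributes, one deterministic alternative (a sketch; the class name and data values are invented for illustration) is to pin the MemberData values at compile time, so every discovery run produces identical test case names:

```csharp
using System.Collections.Generic;
using Xunit;

public class StableTests
{
    // Fixed, compile-time data: the same values (and therefore the
    // same test case names) are produced on every discovery run.
    public static IEnumerable<object[]> SomeTestData()
    {
        yield return new object[] { 42L };
        yield return new object[] { 7 };
        yield return new object[] { "fixed" };
    }

    [Theory, MemberData(nameof(SomeTestData))]
    public void Something(object value)
    {
        Assert.NotNull(value);
    }
}
```

Because the data never changes shape, a runner that discovers and executes in separate processes can always match each case back to its earlier result.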
1 user thanked Remco for this useful post.
robert-j-engdahl on 6/13/2017(UTC)
robert-j-engdahl
#11 Posted : Tuesday, June 13, 2017 9:13:35 AM(UTC)
Rank: Member

Groups: Registered
Joined: 1/26/2017(UTC)
Posts: 13
Location: Denmark

Thanks: 4 times
The download count of the AutoFixture.Xunit NuGet package is significant. Perhaps it would be fair to detect uses of AutoData and InlineAutoData and give a friendly warning, so that others will know that these attributes are unsupported.
Remco
#12 Posted : Tuesday, June 13, 2017 9:27:09 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,161

Thanks: 964 times
Was thanked: 1296 time(s) in 1202 post(s)
robert-j-engdahl;10596 wrote:
The download count of the AutoFixture.Xunit NuGet package is significant. Perhaps it would be fair to detect uses of AutoData and InlineAutoData and give a friendly warning, so that others will know that these attributes are unsupported.


Agreed 100%. I'm actually in the process right now of writing up a documentation page to try to explain the issue in extensive detail. This specific problem is part of a broader category of issues (test uniqueness) that is a very common cause of confusion and frustration under NCrunch. I think it would help everyone a great deal if it were properly explained and the warning messages were meaningful.
1 user thanked Remco for this useful post.
robert-j-engdahl on 6/13/2017(UTC)
Remco
#13 Posted : Tuesday, June 13, 2017 11:55:39 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,161

Thanks: 964 times
Was thanked: 1296 time(s) in 1202 post(s)
I've updated the documentation, adding a new page giving a detailed description of why this is a problem for NCrunch.

I've also had a brainwave and done a quick spike on an approach that I hoped might implement a partial workaround for the AutoFixture problem. The theory was that NCrunch could maintain an in-memory cache of discovered xunit tests, and always refer to this cache if a targeted test had been previously discovered inside the same process. As long as the random parameter data was seeded the same way between each process, this would allow NCrunch to target tests for execution even if unstable parameter data existed. Unfortunately, it didn't work. It seems that the latest version of AutoFixture seeds the random parameter data with new values on every discovery action, regardless of whether it's a new process or not.

For sanity checking, I tried comparing my results with the Xunit VSTest runner, and noticed that this also blew up when using AutoFixture, likely for the same reasons.

Regardless of whether I could get the above to work, there would still be the problem of sequencing. If the random data was seeded consistently, introducing new tests in the assembly would change the generation sequence of the random parameters, resulting in different values. You could add one new test to your assembly, then find that all result and code coverage data for the rest of your suite was suddenly dropped and a whole bunch of different new tests were discovered with different parameter data.

In conclusion, randomness doesn't work. Please avoid it.
Remco
#14 Posted : Friday, June 16, 2017 3:41:19 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,161

Thanks: 964 times
Was thanked: 1296 time(s) in 1202 post(s)
Given the popularity of this library, and because a few things didn't quite add up in my head, I've taken a closer look at why the AutoFixture experience in NCrunch v3.8 is so much better than in v3.9. My initial idea that this was caused by the parameterisation fix was incorrect. The reason for the difference in experience is actually much deeper and more complicated. I think it's reasonable to question why v3.9 can't offer the same experience as v3.8, so I'll go into some detail here to explain what has changed between these two versions of NCrunch and why it just isn't feasible to continue to work with AutoFixture in the same manner as v3.8.

Prior to v3.9, NCrunch had a less conventional approach to its integration with Xunit. This approach involved statically tying into the runtime Xunit libraries (Xunit.core, Xunit.execution, etc.) and packaging these with the NCrunch installation, then forcing the user to work only with this packaged version of Xunit using binding redirections. This enabled tests to be serialized from the process in which they were discovered, then transferred into the runtime environment, directly reconstructed into xunit internal types, and force-fed into the xunit runner for execution. A side effect of this approach was that tests could be executed exactly in the form in which they were first discovered by NCrunch, which for AutoFixture is a big deal, since every time an AutoFixture test is discovered it has a whole new signature and is therefore an entirely different test.

Even with this approach, AutoFixture still had significant problems under NCrunch v3.8. Every time the tests were discovered (usually after code was compiled), they would have completely different signatures, so all test results and coverage were discarded and new tests were created. If test execution times were low enough, this was probably acceptable to many people, but it certainly wasn't intentional or 'correct' behaviour.

Under v3.9, NCrunch scrapped the dependency on the runtime Xunit libraries and instead tied into Xunit.runner.utility, which is the proper supported API for Xunit test runners. Because it's no longer possible for NCrunch to instantiate core Xunit types (like the test cases themselves), serializing and transferring tests directly from the discovery environment is no longer an option. This means that the tests need to be rediscovered inside the runtime environment, giving us the AutoFixture signature mismatch problem.

The NCrunch v3.9 release notes document in some detail the changes made to the Xunit integration along with the pros/cons of these changes (sadly it wasn't all positive). The major issue with the old approach was that it wasn't cross compatible with multiple versions of Xunit, so Xunit v2.2 became a complication that couldn't be handled.

So basically, what we're seeing here is a fundamental conflict between maintaining support for AutoFixture vs. maintaining support for everything else under Xunit. Considering that the old handling for AutoFixture had serious problems already, and given a full understanding of the situation, there really is only one way this could go. I'm not giving up on the idea that AutoFixture itself could be fixed in a way that would allow it to work with the new integration, but the latest version as of this date is incompatible with NCrunch and any other runner that performs selective execution of tests.
robert-j-engdahl
#15 Posted : Monday, July 24, 2017 9:31:35 PM(UTC)
Rank: Member

Groups: Registered
Joined: 1/26/2017(UTC)
Posts: 13
Location: Denmark

Thanks: 4 times
Things being as they are, perhaps the error message could list the location of the offending test code? Right now I get one warning per assembly.

On a side note: I think it is reasonable that, if performance can be increased at the cost of random test parameters, that trade-off is acceptable. We don't use randomness in a mathematical sense anyway. If something should work for any DateTime (or int, or decimal, and so on), we just have AutoFixture give us one, so that fake implementations won't make the test pass.
zvirja
#16 Posted : Wednesday, August 23, 2017 6:24:49 PM(UTC)
Rank: Newbie

Groups: Registered
Joined: 8/23/2017(UTC)
Posts: 3
Location: Ukraine

Thanks: 1 times
Hi guys,

I'm one of the AutoFixture maintainers, and I intend to solve this compatibility issue somehow. It's unfortunate that our integration is broken and people need to look for a trade-off.

I've created a discussion on GitHub: https://github.com/AutoFixture/AutoFixture/issues/805. Could somebody from the developer team jump in there and describe why it is not possible to solve the issue? Currently, it looks more like an NCrunch/xUnit issue, as we explicitly disabled discovery for our attributes and the xUnit API supports that. I would like to organize a three-way discussion between NCrunch, AutoFixture and xUnit and find a way to solve the problem for all.

Please reply on GitHub and let's continue the conversation there.