NCrunch Console Test Count Inconsistency
scjones
#1 Posted : Tuesday, July 7, 2020 1:24:28 PM(UTC)
Rank: Newbie

Groups: Registered
Joined: 7/7/2020(UTC)
Posts: 3
Location: United Kingdom

I know this has come up a couple of times before, but I've hunted around the forums and the internet without finding an answer or solution, so I thought I'd ask again. When using NCrunch Console 4.3.0.13 with NUnit 3, we are seeing an extra "Fixture" unit test added to every class that has individual unit tests. We use the "class nesting" approach to group our unit tests: we have a test fixture that represents the class under test, and within it multiple nested classes named things like "Constructor", "GetMethod" and "PostMethod", each of which contains multiple tests covering that particular area of the class under test. This is a fairly well-known pattern for grouping tests together.
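To illustrate, here is a minimal sketch of the pattern (all names invented, with the class under test defined inline so the sketch compiles):

    using NUnit.Framework;

    // Hypothetical class under test, defined inline for the example.
    public class Calculator
    {
        public int Value { get; private set; }
        public void Add(int amount) => Value += amount;
    }

    [TestFixture]
    public class CalculatorTests
    {
        // Each nested class groups the tests for one area of Calculator.
        [TestFixture]
        public class Constructor
        {
            [Test]
            public void Value_StartsAtZero() =>
                Assert.That(new Calculator().Value, Is.EqualTo(0));
        }

        [TestFixture]
        public class AddMethod
        {
            [Test]
            public void Value_ReflectsTheAddedAmount()
            {
                var calculator = new Calculator();
                calculator.Add(5);
                Assert.That(calculator.Value, Is.EqualTo(5));
            }
        }
    }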

Unfortunately, because NCrunch includes a test result for each fixture, this significantly inflates the number of tests counted by the console and reported in TeamCity, making it wildly different from the test count shown by the NCrunch runner within Visual Studio 2019. One of the main goals of putting NCrunch on TeamCity was to have an exact match between the test results/metrics in Visual Studio and those on TeamCity, and this count mismatch puts a spanner in those works.

It is very difficult to explain to team members why this count is different, as we don't use [SetUp], [OneTimeSetUp] or private member variables in any of our test fixtures. It's even more difficult to explain why a single test failure in Visual Studio results in 2 test failures on TeamCity. So can you please answer the following:

1) Why is there a test reported for each "fixture" class? How is that information captured - is it recorded when the first test (or any test) is run within that fixture, or is the fixture discovered and executed separately, before any other test is run?
2) If there are no [SetUp] or [OneTimeSetUp] methods within the class, why is the "fixture" class still reported in the test count and among the test failures?
3) Can we please have a way to disable/filter the _Fixture_ tests displayed in TeamCity? I don't believe we will ever have fixture-level test failures, and this is really causing us an issue.

Happy to provide examples if you need.
Remco
#2 Posted : Wednesday, July 8, 2020 1:58:55 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Hi, thanks for posting. Answers are below:

scjones;14849 wrote:

1) Why is there a test reported for each "fixture" class? How is that information captured - is it recorded when the first test (or any test) is run within that fixture, or is the fixture discovered and executed separately, before any other test is run?


This is due to the abstractions that exist in the reporting structure of NUnit (and other frameworks too).

When we integrate with a framework, we essentially interrogate its code for information on the test assembly, so that we can build a list of tests. The data the framework provides to us conforms with a basic hierarchical structure that contains no specific detail on the attributes being used to specify the tests, just the basic form and the various names involved. Under NUnit, all tests must reside under a fixture, and the fixture itself is an important object that can contain the same sorts of data as any other test within the suite (for example, a pass/fail result, code coverage data and trace output).

As with tests, we build fixtures in response to the structural data provided by NUnit during discovery. We then populate these fixtures with results when the tests are run. The NUnit results output contains separate elements for these 'fixture tests' including pass/fail results, exception messages, etc.
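To make this concrete, here is an abridged, hand-typed illustration of the shape of NUnit 3's result output (ids and names invented; real output carries many more attributes). Note that the fixture suites carry a result of their own, not just the test cases:

    <test-suite type="TestFixture" id="0-1000" name="CalculatorTests" result="Passed">
      <test-suite type="TestFixture" id="0-1001" name="AddMethod" result="Passed">
        <test-case id="0-1002" name="Value_ReflectsTheAddedAmount" result="Passed" />
      </test-suite>
    </test-suite>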

scjones;14849 wrote:

2) If there are no [SetUp] or [OneTimeSetUp] methods within the class, why is the "fixture" class still reported in the test count and among the test failures?


This is due to the reporting limitations of TeamCity. At the time we implemented our integration with TC, we didn't have a way to report the fixtures separately from the tests (TC didn't seem to have a concept for capturing them that we could use). Because the fixture results are just as important as the test results and we couldn't afford for them to be excluded from the run, we simply report them as virtual tests instead. TC therefore adds them to its internal count, which is then reported outside of our control.
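For context, TeamCity discovers tests from service messages written to stdout, so a fixture reported as a 'virtual test' looks to TC just like any other test. A hand-typed illustration (names invented; the exact form NCrunch emits may differ):

    ##teamcity[testStarted name='CalculatorTests (fixture)']
    ##teamcity[testFinished name='CalculatorTests (fixture)' duration='3']
    ##teamcity[testStarted name='CalculatorTests.AddMethod.Value_ReflectsTheAddedAmount']
    ##teamcity[testFinished name='CalculatorTests.AddMethod.Value_ReflectsTheAddedAmount' duration='12']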

As described earlier, we don't have a safe way to determine whether a fixture is likely to have a [OneTimeSetUp] without sidestepping the abstractions in place within NUnit (which would be terrible for forwards compatibility). Thus we cannot safely exclude fixtures from the result set without the risk of swallowing relevant results.

scjones;14849 wrote:

3) Can we please have a way to disable/filter the _Fixture_ tests displayed in TeamCity? I don't believe we will ever have fixture-level test failures, and this is really causing us an issue.


This is probably a reasonable thing to request, and you are welcome to raise a request for it on UserVoice if you like.

However, you should be aware that there may be unintended consequences to doing this. Removing the fixtures from the TC result set can cause you to lose relevant error information if one of your fixtures fails inside a [OneTimeSetUp] method. Currently under NUnit, the child tests will fail sympathetically if their parent fixture fails, but they do not contain the source exception details.
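To illustrate the failure mode (names invented): the fixture-level result is the only place the source exception appears in full, while the child tests only fail sympathetically.

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class DatabaseTests
    {
        [OneTimeSetUp]
        public void ConnectToTestDatabase()
        {
            // If this throws, the fixture itself fails with this exception...
            throw new InvalidOperationException("Could not reach test database");
        }

        [Test]
        public void SomeQuery_ReturnsRows()
        {
            // ...and this test fails sympathetically, without the source
            // exception details described above.
            Assert.Pass();
        }
    }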
scjones
#3 Posted : Wednesday, July 8, 2020 8:31:15 AM(UTC)
Rank: Newbie

Groups: Registered
Joined: 7/7/2020(UTC)
Posts: 3
Location: United Kingdom

Many thanks for the quick response, I now understand the reasoning and appreciate the detailed explanation.

However, I do need to emphasise how much I think we need a workaround for this. We have been using the built-in TeamCity NUnit runner for many years, and developers are accustomed to the lists of tests and test counts in TeamCity matching those in their development environment. I've been pushing to get all developers off the ReSharper test runner and onto NCrunch, and I have the budget to buy the additional 50 NCrunch licenses required to do so, but a primary justification I've been using is that the build system's test results & code coverage would precisely match what developers have when running tests locally - and right now they don't.

I can also understand your point about the potential loss of diagnostic resolution. I've just run the same tests in TeamCity through both the standard NUnit runner and NCrunch, and I can clearly see the increased diagnostic resolution under NCrunch when an exception is thrown in a [OneTimeSetUp] method (i.e. the Fixture test gives the exact error file location and exception details), whereas the NUnit runner doesn't report that at all.

So what I think we need is an option in the Console tool to exclude all PASSING Fixture tests from both the NCrunch HTML report and the ##teamcity service messages written to stdout. That way, if a fixture test fails it will be reported with the required diagnostic resolution, whereas if it passes the test count is not affected and everything would tie up wonderfully.
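For clarity, here is the rule I'm proposing, sketched as hypothetical C# (these types are invented for illustration and are not part of NCrunch):

    // Hypothetical types, invented purely to illustrate the proposed rule.
    enum Outcome { Passed, Failed, Skipped }

    record ReportedResult(string Name, bool IsFixture, Outcome Outcome);

    static class FixtureFilter
    {
        // Emit a result unless it is a fixture that passed: failing fixtures
        // keep their diagnostic detail, passing ones stay out of the count.
        public static bool ShouldReport(ReportedResult result) =>
            !result.IsFixture || result.Outcome != Outcome.Passed;
    }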

Do you think this is functionally possible? If so I will raise the request in uservoice as you recommended.
Remco
#4 Posted : Wednesday, July 8, 2020 8:45:18 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
scjones;14855 wrote:

Do you think this is functionally possible? If so I will raise the request in uservoice as you recommended.


From what I understand of the current implementation, I feel this is a reasonable request, and it could be implemented as a custom filter in a similar way to the 'Tests to execute automatically' setting.

From a product management side, we do need to be careful to avoid implementing a new configuration setting for every customer request (as we already have more than enough settings to make the product difficult to learn). If you can word the request on UserVoice according to how you would want the problem to be solved, and have a good number of potential users in your organisation vote for it, I can promise it will receive a proportional level of attention from the development side.
scjones
#5 Posted : Thursday, July 9, 2020 10:16:23 AM(UTC)
Rank: Newbie

Groups: Registered
Joined: 7/7/2020(UTC)
Posts: 3
Location: United Kingdom

Again, many thanks for the quick response. I have created a UserVoice request and am working on getting all my developers to vote on it. Unfortunately, many have already reported that they clicked the "vote" button using their work email address but are not seeing the vote count go up. Is it limited by rate of votes, source IP address or domain name? I can easily get 50 votes on it today, but it seems the voting system might not support businesses...
Remco
#6 Posted : Thursday, July 9, 2020 12:42:14 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
scjones;14857 wrote:
Again, many thanks for the quick response. I have created a UserVoice request and am working on getting all my developers to vote on it. Unfortunately, many have already reported that they clicked the "vote" button using their work email address but are not seeing the vote count go up. Is it limited by rate of votes, source IP address or domain name? I can easily get 50 votes on it today, but it seems the voting system might not support businesses...


Not as far as I know, but the system is managed by a third party (UserVoice) and it's possible they have limiters of some sort to prevent spam, etc. Just do your best :)