Large solution - pending tests do not execute
rathga
#1 Posted : Monday, November 28, 2022 12:36:07 PM(UTC)
Rank: Newbie

Groups: Registered
Joined: 9/25/2020(UTC)
Posts: 4
Location: United Kingdom

Thanks: 2 times
Hi there

Since the advent of VS 2022, we have been experimenting with amalgamating 20+ smaller solutions into one big solution of c. 750 projects.

It has worked fine so far, but we are having trouble getting NCrunch to execute tests. The tests executed fine in the previous individual, smaller solution files.

NCrunch successfully builds everything and executes some of the tests (about 3k out of 16k), but then stops. The tests remain as pending tasks in the processing queue, and there are no active tasks.

I can remove all (13k+) pending tests and try to queue just a handful of them manually, but the situation is the same: the tasks are pending but never execute.

I can successfully re-queue and re-run the tests that previously executed ok.

How do I debug this and find out why the tests are not running?

Thanks


Remco
#2 Posted : Monday, November 28, 2022 10:52:16 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Hi, thanks for sharing this problem.

We regularly test the engine with significantly more tests than this, so my expectation is that there is a structural reason for these tests not being run (rather than the raw quantity of them). 750 projects is quite an impressive number though.

My suspicion is that the tests have a dependency that is not being met. Open the Processing Queue Window and examine the list of tasks.

Have any of the build tasks failed? Have the build tasks all completed?

Right click the column header and make sure to turn on the 'Required Capabilities' column. Do any of the test or build tasks have a required capability that doesn't exist in the machine's list of configured capabilities?
1 user thanked Remco for this useful post.
rathga on 11/29/2022(UTC)
rathga
#3 Posted : Tuesday, November 29, 2022 8:54:18 AM(UTC)
Rank: Newbie

Groups: Registered
Joined: 9/25/2020(UTC)
Posts: 4
Location: United Kingdom

Thanks: 2 times
Thank you for the reply.

None of the build tasks have failed, but we do have two .NET Standard 'helper' projects that reference NUnit for assertions but contain no tests (i.e. they are not actually test projects), and these come up as an analysis failure. I think this should be OK (?), or will NCrunch refuse to run tests that rely on these libraries?
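For concreteness, a helper project of this shape might look something like the following. This is a hypothetical sketch, not our actual project file; the package version is illustrative:

```xml
<!-- Hypothetical 'helper' project: targets netstandard2.0 and references
     NUnit for shared assertion helpers, but declares no tests of its own -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="NUnit" Version="3.13.3" />
  </ItemGroup>
</Project>
```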

The 'Required Capabilities' column in the pending task list is all blank.

Anything else to check?

rathga
#4 Posted : Tuesday, November 29, 2022 9:22:28 AM(UTC)
Rank: Newbie

Groups: Registered
Joined: 9/25/2020(UTC)
Posts: 4
Location: United Kingdom

Thanks: 2 times
Bingo! It looks like disabling the NUnit 3 test framework for these two utility projects (as suggested right at the end of https://www.ncrunch.net/documentation/considerations-and-constraints_netstandard-test-projects) makes all the tests run.
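For anyone else hitting this, the change amounts to turning off the NUnit 3 framework in the project-level NCrunch settings for each utility project. A sketch of what that looks like in the project's `.v3.ncrunchproject` file follows; the element name is an assumption based on NCrunch's project-level configuration and should be verified against your version:

```xml
<!-- Sketch (assumed setting name): disable NUnit 3 test discovery for a
     netstandard utility project that references NUnit but contains no tests -->
<ProjectConfiguration>
  <Settings>
    <FrameworkUtilisationTypeForNUnit3>Disabled</FrameworkUtilisationTypeForNUnit3>
  </Settings>
</ProjectConfiguration>
```

The same setting can be changed through the NCrunch configuration window rather than by editing the file directly.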

Not sure that should be the default behaviour? Or maybe it's worth adding an explanation to the error message that this might happen in the case of utility projects? But something for you guys to figure out, I guess :)

Thanks
Remco
#5 Posted : Tuesday, November 29, 2022 10:01:22 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Nice work, that will fix it :)

The problem here isn't that these are test utility projects. It's perfectly fine to have projects that reference test frameworks but don't declare any tests. The problem is that these projects target a platform that cannot be directly instantiated at runtime (netstandard). We have no way of knowing whether a project contains tests without first loading it into a runtime environment, which we can't do for netstandard projects ... hence the failure.

The engine should have given you a meaningful error about this, so I think you encountered behaviour as designed here. Great to hear you are up and running now.
rathga
#6 Posted : Wednesday, November 30, 2022 4:05:20 PM(UTC)
Rank: Newbie

Groups: Registered
Joined: 9/25/2020(UTC)
Posts: 4
Location: United Kingdom

Thanks: 2 times
Sure, that's all understood, and thanks.

I got a meaningful error for these projects in the regard you specify, and that is fine. But what I think is NOT expected behaviour is that the client test projects for these two library projects did not execute. I'm not sure there is a good reason for that? Whilst tests cannot be located in the projects themselves, they should still build as .NET Standard and be used as a dependency in the normal way by other projects, without blocking the running of tests in them. Unless I am missing something... ?
Remco
#7 Posted : Wednesday, November 30, 2022 11:17:13 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
rathga;16358 wrote:

I got a meaningful error for these projects in the regard you specify, and that is fine. But what I think is NOT expected behaviour is the fact that client test projects for these two library projects did not execute. I'm not sure there is a good reason for that? Whilst tests cannot be located in the projects themselves, they should still be built in .netstandard and used as a dependency in the normal way for other projects, and not block the running of tests in them. Unless I am missing something... ?


This happens due to the structural behaviour of the engine. The build and analysis steps for a project are considered to be a dependency of all other projects that depend on it. Technically, the analysis step does not need to be a dependency (as it isn't required for building a test environment for referencing projects). However, in terms of sensible operation, we do still need to perform all analysis steps successfully before we can properly prioritise tests in the queue. Sensible prioritisation is VERY important, especially when big tests are involved. We don't want the engine kicking off big test runs when the overall list of tests is still a big unknown. For this reason, a failure encountered during analysis is given the same level of status as a build error.

It's possible to improve the above. We could implement a range of hacks and implicit logic to try and allow the engine to still process the rest of the work around analysis failures, but it really isn't worth it. There is a serious error being reported that requires attention. To properly use the product without alarm bells going off, you need to fix the error. Allowing the product to work around the error gives very little value when a decision about the netstandard project needs to be made anyway.