How to disable the analysis task
samuelhess1
#1 Posted : Thursday, June 27, 2019 6:44:12 AM(UTC)
Rank: Newbie

Groups: Registered
Joined: 4/1/2019(UTC)
Posts: 4
Location: Germany

The "NCrunch Processing Queue" almost permanently shows a task called "Analysis". The "NCrunch Timeline Export to html" confirms that most of the time one CPU core is busy doing analysis.

My understanding of the "Analysis" task is that NCrunch runs the Roslyn analyzers that come with NuGet packages like "Roslynator.Analyzers" or "Microsoft.CodeAnalysis.FxCopAnalyzers".

If that understanding is correct, Visual Studio is doing the exact same thing and NCrunch does not need to do it.

Question: How can I make NCrunch skip the "Analysis" task and focus on fast builds and test execution?

(I have a rather old computer and about 5 such analyzer NuGet packages installed in a solution with 35 projects.)
Remco
#2 Posted : Thursday, June 27, 2019 7:49:44 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 5,759

Thanks: 745 times
Was thanked: 956 time(s) in 911 post(s)
Hi, thanks for posting.

The analysis task is a test discovery action. It involves creating a runtime environment (i.e. launching your code using the CLR and loading the assembly), then asking the test framework for details about the tests contained in the assembly. There is no relationship between this task and Roslyn analyzers (it was actually introduced years before Roslyn even existed in any public form).

The performance of this task is directly dependent on the performance of the testing framework you are using. Test frameworks such as NUnit and XUnit can be quite complex during test discovery as they build an in-memory model of your tests using reflection. If you want NCrunch to be able to execute your tests, there is no way to skip this task.

Later in the queue NCrunch will use the same runtime environment to execute your tests. This means that if there were a way to prevent this task from being run, the work being done would simply move to the execution tasks instead.
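The reflection-based discovery described above can be sketched roughly like this. This is a hypothetical, heavily simplified illustration of what a framework such as xUnit does internally during discovery; the class name and the attribute-matching logic are assumptions for illustration, not NCrunch's or xUnit's actual code:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Simplified sketch: a test framework "discovers" tests by loading an
// assembly and reflecting over its types and methods. Real frameworks
// (xUnit, NUnit) build much richer in-memory models of each test, which
// is where the CPU time in NCrunch's "Analysis" task goes.
static class DiscoverySketch
{
    public static string[] DiscoverTests(Assembly assembly)
    {
        return assembly.GetTypes()
            .SelectMany(t => t.GetMethods(BindingFlags.Public | BindingFlags.Instance))
            // Treat a method as a test if it carries a test attribute,
            // e.g. xUnit's [Fact] or [Theory].
            .Where(m => m.GetCustomAttributes()
                .Any(a => a.GetType().Name is "FactAttribute" or "TheoryAttribute"))
            .Select(m => $"{m.DeclaringType!.FullName}.{m.Name}")
            .ToArray();
    }
}
```

Note that a [Theory] additionally requires the framework to enumerate its data rows, so each parameterised test multiplies the discovery work beyond what this sketch shows.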
samuelhess1
#3 Posted : Thursday, June 27, 2019 9:13:06 AM(UTC)
Rank: Newbie

Groups: Registered
Joined: 4/1/2019(UTC)
Posts: 4
Location: Germany

Ok, so disabling the analysis does not make any sense. :|o
That also explains why the tests start running only after the analysis task has completed.

I actually use xUnit Facts and Theories.
I assume there is no way to accelerate the analysis task other than using a more powerful computer or distributed processing. Is that correct?
Remco
#4 Posted : Thursday, June 27, 2019 10:07:33 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 5,759

Thanks: 745 times
Was thanked: 956 time(s) in 911 post(s)
samuelhess1;13624 wrote:

I assume there is no way to accelerate the analysis task other than using a more powerful computer or distributed processing. Is that correct?


Distributed Processing will be of very limited help here, because we presently don't distribute the analysis task in the same way as the test tasks. Right now the analysis task will run on each node (we have plans to change this eventually).

Upgrading your hardware will usually help.

It may also be possible to redesign your test suite to reduce the cost of test discovery. I'm not really an expert on the behaviour of the internals of xUnit, but I would expect that more complex constructions such as Theories will probably require more CPU for the framework to discover. Reducing your overall number of tests will improve discovery time. If the option is available, you could also try different test frameworks to see which ones perform better for your solution. MSTest usually gives the best performance under NCrunch because it is run under optimised emulation (rather than being directly integrated).
samuelhess1
#5 Posted : Friday, June 28, 2019 5:23:36 AM(UTC)
Rank: Newbie

Groups: Registered
Joined: 4/1/2019(UTC)
Posts: 4
Location: Germany

Thank you for your tip!

I've restructured one big Theory with ~150 InlineDataAttributes to a normal FactAttribute test. This reduced the required time of the Analysis task to less than a second.
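For anyone attempting the same restructuring, the change was roughly of this shape. This is a hypothetical sketch: the method names, the data, and the system under test are invented, and the real Theory had ~150 rows:

```csharp
using Xunit;

public class FibonacciTests
{
    // Before: one discovered test case per [InlineData] row, all of
    // which the framework must enumerate during discovery.
    //
    // [Theory]
    // [InlineData(0, 0)]
    // [InlineData(1, 1)]
    // ... ~150 rows ...
    // public void Fibonacci_Theory(int number, long expected) { ... }

    // After: a single [Fact] that loops over the same data at execution
    // time, so discovery only has to find one test.
    [Fact]
    public void Fibonacci_AllCases()
    {
        var cases = new (int Number, long Expected)[]
        {
            (0, 0), (1, 1), (2, 1), (80, 23416728348467685),
            // ... remaining cases ...
        };

        foreach (var c in cases)
            Assert.Equal(c.Expected, Fibonacci(c.Number));
    }

    // Hypothetical system under test.
    private static long Fibonacci(int n)
    {
        long a = 0, b = 1;
        for (var i = 0; i < n; i++) (a, b) = (b, a + b);
        return a;
    }
}
```

The trade-off is that the runner now reports one test instead of ~150, so a failing case no longer shows up as its own result.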

However, the existence of Theories is the main reason I am using xUnit instead of MSTest - so I'm going to reconsider that decision :|
samuelhess1
#6 Posted : Friday, June 28, 2019 5:51:41 AM(UTC)
Rank: Newbie

Groups: Registered
Joined: 4/1/2019(UTC)
Posts: 4
Location: Germany

I've just learned that MSTest has the same concept as xUnit Theories:

[DataRow(0, 0)]
[DataRow(1, 1)]
[DataRow(2, 1)]
[DataRow(80, 23416728348467685)]
[DataTestMethod]
public void MyTest(int number, long result)
{
    // do something
    // (note: the second parameter must be long - 23416728348467685
    // does not fit in an int)
}

I've also observed that NCrunch does not show any "Analysis" task when using MSTest. This explains why I've never seen it during the last 8 years of NCrunch usage, but now suddenly (since I'm playing around with xUnit) it showed up :)

Before using xUnit, I consulted https://www.ncrunch.net/support/frameworks. Maybe it would be a good idea to add a hint about NCrunch runtime performance for each test framework. This might help others not learn it the hard way as I did.
Remco
#7 Posted : Saturday, June 29, 2019 12:10:08 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 5,759

Thanks: 745 times
Was thanked: 956 time(s) in 911 post(s)
samuelhess1;13632 wrote:

Before using xUnit, I consulted https://www.ncrunch.net/support/frameworks. Maybe it would be a good idea to add a hint about NCrunch runtime performance for each test framework. This might help others not learn it the hard way as I did.


I'd like to do this, and actually have been planning for it for a while ... though it's sadly not as simple as making general comments about performance between the frameworks. Some frameworks do better at some things than others, and it usually depends largely on how the code is structured and which features of the frameworks are being used. It is also a bit of a tricky area politically, as we try to stay on good terms with the developers of these frameworks (most of which are volunteers), so publishing comparisons of the performance of their work would seem like picking sides.

Right now there is a very large piece of development work pushing its way up through the NCrunch engine to resolve issues with instrumentation performance. Following this, there'll probably be a renewed focus on the performance of test discovery and execution (which will be the new bottleneck), so I suppose many things are likely to change.