Performance drop when upgrading from V4 to V5.9
coverHarman
#1 Posted : Tuesday, October 22, 2024 12:33:49 PM(UTC)
Rank: Newbie

Groups: Registered
Joined: 10/22/2024(UTC)
Posts: 6
Location: United Kingdom

Thanks: 1 times
We’ve recently upgraded from V4.18 to V5.9 and have noticed that 600 tests which used to take ~15mins now take ~1hr.

We are using SpecFlow to define our tests and Playwright to execute them, and I have made the following changes to the NCrunch performance settings:
Max number of processing threads = 4
Max number of test runner processes to pool = 4

I’ve tried to compare the tests running on my own machine to a colleague who still has V4.18 and can see that there are some differences when running the tests:
  • Actions in the browser seem a lot slower. In the older version the button clicks and data entry seem to happen at a much higher cadence.
  • Browser windows close more regularly in the new version. We have a method that checks the ‘NCrunch’ environment variable to detect when tests are running under NCrunch and avoid this happening. Prior to upgrading, this check was honoured and improved test runtime considerably. The variable is still set the same, and the result of the check is still what we’d expect, but it no longer helps performance.
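
For reference, that kind of guard is just an environment variable check. A minimal sketch in Python for illustration (this suite is .NET, where the equivalent would read the variable via Environment.GetEnvironmentVariable; NCrunch sets the 'NCrunch' variable to '1' in its test environments):

```python
import os

def running_under_ncrunch() -> bool:
    """True when the test process was launched by NCrunch.

    NCrunch sets the 'NCrunch' environment variable to '1' in the
    environments it builds to run tests in, so suites can use it to
    vary behaviour (e.g. keep browser windows open between tests).
    """
    return os.environ.get("NCrunch") == "1"
```

If the check still returns the expected value after the upgrade, the variable itself isn't the problem and the behavioural difference lies elsewhere.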

In an attempt to solve this issue I've removed the NCrunch cache from one of the offending solutions, but there was no obvious improvement afterwards. I've also compared the processing queue to a colleague's and can't see any obvious difference in the way our tests run.

Is there anything I can do to resolve this issue in the config or anywhere else?
Remco
#2 Posted : Tuesday, October 22, 2024 10:48:12 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,161

Thanks: 964 times
Was thanked: 1296 time(s) in 1202 post(s)
Hi, thanks for sharing this issue.

My first thought is to check your RDI settings. RDI is a new feature we introduced in V5 that provides a wide range of analysis capabilities for code under test. The drawback is that it can massively increase resource demands.

RDI won't enable itself automatically, but it's possible someone on your team turned it on and committed the config change, in which case you may have picked it up without expecting it.
coverHarman
#3 Posted : Thursday, October 24, 2024 10:11:59 AM(UTC)
Rank: Newbie

Groups: Registered
Joined: 10/22/2024(UTC)
Posts: 6
Location: United Kingdom

Thanks: 1 times
Hi Remco,

Thanks for getting back to me. I've disabled RDI but the performance is still the same unfortunately. This is a great feature though!

I have also tried rolling back to v4.18 to make sure it's not something else that's changed and, while I do get a few transient failures (which I believe are down to some errant tests), the runtime is about 15 minutes for 800 tests, so this definitely seems to be something in the recent version. I'll try stepping up through the versions this afternoon to see if there's one in particular where things start to slow down.

Are there any config changes stored on the local machine outside of the VS solution?
Remco
#4 Posted : Thursday, October 24, 2024 11:58:50 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,161

Thanks: 964 times
Was thanked: 1296 time(s) in 1202 post(s)
Thanks for these extra details. This issue has me a bit perplexed. It would be really great if you're able to pinpoint exactly which version update started causing this problem. Off the top of my head, I can't think of anything we've recently changed in the product that could cause this other than RDI (which when disabled will have no impact).

Under your user roaming profile there is a global config file (C:\Users\USER\AppData\Roaming\NCrunch). Renaming or deleting this file with the IDE closed will cause your local settings to be reset. This is the only place the NCrunch client stores settings outside of the solution and project config files.
coverHarman
#5 Posted : Thursday, October 24, 2024 12:12:58 PM(UTC)
Rank: Newbie

Groups: Registered
Joined: 10/22/2024(UTC)
Posts: 6
Location: United Kingdom

Thanks: 1 times
Thanks Remco, I'll have a dig around the versions and see if I can pinpoint anything useful. I'm off tomorrow, so it may be next week before I get back to you.
coverHarman
#6 Posted : Thursday, October 24, 2024 2:11:18 PM(UTC)
Rank: Newbie

Groups: Registered
Joined: 10/22/2024(UTC)
Posts: 6
Location: United Kingdom

Thanks: 1 times
Hi Remco,

I've been through the following for 800 tests:

4.18 - ~15 mins
5.00 - ~1 hr 10 mins
5.09 - ~1 hr 10 mins
5.10 - ~15 mins

I'm not sure if anything drastic changed in this latest release or whether installing and uninstalling different versions has solved it, but everything looks good now, including that jittery UI bug :)
Remco
#7 Posted : Thursday, October 24, 2024 11:00:53 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,161

Thanks: 964 times
Was thanked: 1296 time(s) in 1202 post(s)
Great to hear! I have a feeling you might see this issue again, but I suspect it may be due to execution sequence or batching, in which case re-running the whole test suite will likely resolve it. Another option is to clear out your NCrunch cache files, which would reset the engine's knowledge of how long tests take to run.

If you have tests with very long setup times in their environment (for example, a really expensive static constructor), it's possible to see this sort of thing happen. It may be worth checking the NCrunch hot spots window to see if there is a common pattern around where much of the time in the test run is spent... it may be something that's easy to fix.

I'll add that I don't think this is actually related to the version of NCrunch you're using, save for the fact that switching versions will sometimes cause the engine to rebuild the cache file. The V5 releases have been very focused on RDI, Rider support, and DPI fixes; none of these can have the kind of performance impact you're experiencing.
coverHarman
#8 Posted : Tuesday, December 10, 2024 3:38:56 PM(UTC)
Rank: Newbie

Groups: Registered
Joined: 10/22/2024(UTC)
Posts: 6
Location: United Kingdom

Thanks: 1 times
Unfortunately this speed-up was short-lived and I've been struggling with slowness again. I've been poking around today and downgraded to 5.2, which temporarily sped things up, but after a few test runs everything grinds to a halt again.

Is there anything else you could think of that might lead to this?

I'm running identical software to my colleagues, who run the same tests in 15 minutes, but yesterday a run took me 3 hours. I'm getting kind of desperate for a solution, or at least some sort of clue as to where the issue might be.
Remco
#9 Posted : Wednesday, December 11, 2024 12:33:02 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,161

Thanks: 964 times
Was thanked: 1296 time(s) in 1202 post(s)
My suspicion here is that this problem is caused by tests with inconsistent execution times.

If the time taken for tests to execute isn't consistent, NCrunch won't know how to optimise the processing queue correctly. This can result in the overall execution time being wildly inconsistent, with some runs finishing very quickly and others taking a very long time.

The root cause here is batching. Tests that take a very short time to execute are batched together and run as a single task. If you look at the processing queue window, you'll likely see a few tasks with lots of tests, then a lot of tasks with very few tests. Normally this is efficient, because the time taken to prepare an environment and invoke the test framework is significant, so we don't want to pay it for every individual test. Where this goes wrong is when NCrunch thinks a test is very quick when its execution time is actually very long. This can result in the 'fast test' batches taking a long time to run, so the parallel execution doesn't work efficiently.
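
The effect of one mispredicted batch on a parallel schedule can be sketched with a toy simulation (illustrative Python with invented durations; this is a simple greedy longest-first scheduler, not NCrunch's actual algorithm):

```python
import heapq

def wall_time(batch_durations, workers=4):
    """Greedy longest-first assignment of task batches to parallel
    workers; returns the makespan (when the last worker finishes)."""
    loads = [0.0] * workers
    heapq.heapify(loads)
    for d in sorted(batch_durations, reverse=True):
        # Always hand the next batch to the least-loaded worker.
        heapq.heappush(loads, heapq.heappop(loads) + d)
    return max(loads)

# 600 'fast' tests batched 50 per task, plus four slow tests run singly.
accurate = [50 * 0.1] * 12 + [30.0] * 4
# The same suite, but one supposedly-fast batch hides a 300s test:
mispredicted = [50 * 0.1] * 11 + [300.0] + [30.0] * 4

print(wall_time(accurate))      # schedule packs well across workers
print(wall_time(mispredicted))  # the hidden batch dominates the run
```

The point of the toy is that a single batch whose real cost is far above its estimate sets the floor for the whole run, no matter how well the rest are spread.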

The biggest causes of this are things like load-on-demand patterns that cache/index data or do groundwork for other services in the process, but really it could be anything that causes a test to oscillate between long and short run times depending on its execution sequence relative to other tests.
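
A minimal sketch of that load-on-demand pattern, in Python for illustration (the names and the 0.5s cost are invented):

```python
import time

_index = None  # built lazily, shared by every test in the process

def expensive_lookup(key):
    """Whichever test calls this first pays the full indexing cost and
    looks 'slow'; every later caller looks 'fast'. Which test pays the
    cost changes with execution order, so measured times oscillate."""
    global _index
    if _index is None:
        time.sleep(0.5)  # stand-in for caching/indexing groundwork
        _index = {i: i * i for i in range(1000)}
    return _index[key]
```

In a suite like this, the recorded time for any individual test depends on whether it happened to run first, which is exactly the inconsistency that breaks batch sizing.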

The easiest way to find these problems is using the hot spots window after you've just had a really long test cycle. This should help to identify the areas of code that took a long time to execute. If you trace how this code is being used, you may be able to find a way to resolve the issue.