Parallel execution not spreading tests across available processes
applieddev
#1 Posted : Tuesday, April 14, 2020 3:07:04 PM(UTC)
Rank: Newbie

Groups: Registered
Joined: 11/13/2019(UTC)
Posts: 8
Location: United Kingdom

We're seeing an issue with our SpecFlow 3.1 tests running in parallel across multiple NCrunch nodes with many threads available.

Execution is being held up by one or two processes that each run multiple tests, from multiple features/folders, with different tags. This increases our test time from 20 minutes to 60+ minutes.

I've scaled down the timeline by a factor of 40 to fit it into a screenshot:

[Timeline screenshot]

There's one process taking over 50 minutes to run 8 tests from 4 features with different tags; the longest test it runs is under 14 minutes.

The other processes finished after 10 minutes and have been sitting idle since.

I noticed that the two really long processes show as a single block covering multiple tests from multiple features in the timeline, while the other processes divide into multiple blocks, running only one test per block.

Is there anything that would cause NCrunch to group those tests into a single block that can't be spread out across multiple processes?
Remco
#2 Posted : Wednesday, April 15, 2020 12:15:34 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 6,201

Thanks: 807 times
Was thanked: 1065 time(s) in 1012 post(s)
Hi, thanks for posting.

Is it possible that these tests have inconsistent execution times, depending upon their execution sequence?

If the tests take longer to execute than the engine expects (for example, they may depend on static state that is initialised at the start of their run), this can cause them to be lumped together: the engine thinks they are fast-executing when they're actually not.

The easiest way to check this is to look at the test tasks inside your Processing Queue. Make sure you have both the 'Expected Processing Time' and 'Actual Processing Time' columns turned on. If these differ significantly from each other, you're likely to see poor optimisation of your test pipeline like this.

I note from the test names that these are all using Selenium. Do they have a shared dependency on the UI, or on anything in the system that might force them through a bottleneck?

You could try marking the test fixtures with NCrunch.Framework.IsolatedAttribute if you want to force them into their own group and their own process.
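As a minimal sketch of what that looks like (assuming an NUnit-based test project that references the NCrunch.Framework NuGet package; the fixture and test names here are hypothetical):

```csharp
using NCrunch.Framework;
using NUnit.Framework;

// [Isolated] (NCrunch.Framework.IsolatedAttribute) tells NCrunch to run
// this fixture in its own task and its own test process, rather than
// batching it with other tests.
[TestFixture]
[Isolated]
public class CheckoutJourneyTests // hypothetical fixture name
{
    [Test]
    public void CompletesPurchase()
    {
        // ... Selenium steps ...
    }
}
```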

applieddev
#3 Posted : Wednesday, April 15, 2020 9:26:42 AM(UTC)

Thanks Remco,

Their execution times are quite consistent, and it's not always the same tests that get bundled together. All the other tests are also Selenium-based, so they all have the same dependencies, yet only a few get bundled together.

Can I see Expected/Actual Processing Time on NCrunch nodes run via TeamCity, or in the TeamCity NCrunch results? If so, where do I find that?

I don't currently have a way to apply IsolatedAttribute with our version of SpecFlow (v3.1). Is it possible to get tests to run isolated by any other means, e.g. via the command line?
Remco
#4 Posted : Thursday, April 16, 2020 12:10:21 AM(UTC)
applieddev;14610 wrote:

Can I see Expected/Actual Processing Time on NCrunch nodes run via TeamCity, or in the TeamCity NCrunch results? If so, where do I find that?


I've just checked through the reports we give from the console tool and confirmed that we don't provide the expected processing time for the tasks, unfortunately.
However, the behaviour of your pipeline should be very similar when using the VS client. I'd recommend checking the results from a normal run-through in your VS client to see if there is an issue.

applieddev;14610 wrote:

I don't currently have a way to apply IsolatedAttribute with our version of SpecFlow (v3.1). Is it possible to get tests to run isolated by any other means, e.g. via the command line?


It's possible to apply this attribute at assembly level. When you do this, all fixtures in the test project are marked as isolated and the engine will always separate them into their own tasks.
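A sketch of the assembly-level form (assuming the test project references the NCrunch.Framework package; the attribute can live in any .cs file in the project):

```csharp
// AssemblyInfo.cs (or any source file in the test project)
using NCrunch.Framework;

// Applies IsolatedAttribute to every fixture in this assembly, including
// SpecFlow's generated test classes, so NCrunch separates each fixture
// into its own task and process.
[assembly: Isolated]
```

This works around not being able to annotate SpecFlow's generated classes directly, at the cost of isolating every fixture in the project rather than just the slow ones.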