Tests not distributed evenly across grid nodes
samholder
#1 Posted : Thursday, January 30, 2020 3:26:43 PM(UTC)
Hi,

I'd like to try and understand some strange behaviour we see occasionally. We are running some UI tests that take a while (some minutes each) and have 5 nodes in the pool of available grid nodes. In this one test run there were 6 tests, but despite all nodes being idle, the distribution was 1/1/4 across the nodes, obviously meaning that the tests took a lot longer than they might have.

there is an image of the NCrunch timeline here: https://drive.google.com...BbAk7V59ZFbaffuJpPBkklP

Is there a reason this happens? Is there anything we can do to make the tests be distributed more evenly?

Thanks

Sam
samholder
#2 Posted : Monday, March 2, 2020 4:47:00 PM(UTC)
Any ideas on this? We have updated to the latest builds and are still seeing this behaviour.

This causes a problem in some long-running smoke/integration tests we have: we picked the tests such that they take about 15 minutes to run when evenly distributed, but with this uneven distribution they can sometimes take 40 minutes, which can hold up our dev pipeline.

Even just knowing whether there is anything we can do to help diagnose the issue would help.
Remco (NCrunch Developer)
#3 Posted : Monday, March 2, 2020 11:27:28 PM(UTC)
Thanks for posting. Looking at the timeline report, my best guess is that this is being caused by the batching of tests. In this instance, we have 5 nodes but only 3 test tasks, so two nodes are left without a task.

Do these tests have any long running code in a fixture setup routine, or are they making use of AtomicAttribute? Which test framework are you using? We need to identify why the engine is choosing to group these long running tests together in the same task.
samholder
#4 Posted : Tuesday, March 3, 2020 10:22:40 AM(UTC)
Thanks for the reply.

The tests don't use the AtomicAttribute. We are using SpecFlow, with MSTest as the underlying test framework. The [BeforeTestSetup] methods generally do some process killing, to make sure any processes hanging around from a previous test run are killed before we start, plus a bit of config file manipulation for the application we are going to start.

In the [BeforeScenario] methods we again kill some processes if they are alive, start video recording of the test, and do a bit of setup. None of it looks particularly 'long' to me, though I suppose it might sometimes take a while for processes to die. I'm not sure what you define as 'long running code' here. We could instrument this time and log it, if that would help diagnose?
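Something like this is roughly what I had in mind for the instrumentation (just a sketch against SpecFlow's standard [BeforeTestRun]/[BeforeScenario] hooks; the Log method is a stand-in for whatever sink we would actually use):

using System;
using System.Diagnostics;
using TechTalk.SpecFlow;

[Binding]
public class SetupTimingHooks
{
    // Stand-in logger; we'd swap this for our real logging sink.
    private static void Log(string message) => Console.WriteLine(message);

    [BeforeTestRun]
    public static void TimeTestRunSetup()
    {
        var sw = Stopwatch.StartNew();
        // ... existing process killing / config file manipulation goes here ...
        sw.Stop();
        Log($"[BeforeTestRun] setup took {sw.ElapsedMilliseconds} ms");
    }

    [BeforeScenario]
    public void TimeScenarioSetup()
    {
        var sw = Stopwatch.StartNew();
        // ... existing process cleanup / start of video recording goes here ...
        sw.Stop();
        Log($"[BeforeScenario] setup took {sw.ElapsedMilliseconds} ms");
    }
}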

As I said, this doesn't happen all the time, just occasionally. I have more examples if that would help, and could potentially provide build logs if required.
Remco (NCrunch Developer)
#5 Posted : Tuesday, March 3, 2020 11:01:27 PM(UTC)
The key variable here is the time the overall fixture takes to run excluding the time spent executing each child test. If the execution time of the fixture is too high, all of its child tests will be grouped together in the same test execution task so that the runner doesn't need to make the expensive fixture setup/teardown code run multiple times during a test run.
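To illustrate what I mean by fixture time (a contrived MSTest sketch, not your code), a fixture like this would report a high fixture execution time even though each individual test is quick:

using System.Threading;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SmokeTests
{
    // One-off fixture setup. Its runtime counts against the fixture rather
    // than any individual test, which is what can push the engine to batch
    // all of the child tests into a single execution task.
    [ClassInitialize]
    public static void FixtureSetup(TestContext context)
    {
        Thread.Sleep(30000); // stand-in for expensive setup work
    }

    [TestMethod]
    public void Scenario1() { /* ... */ }

    [TestMethod]
    public void Scenario2() { /* ... */ }
}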

You can view the execution time of the fixture in the Tests window. You might need to enable the relevant column to see this separately from the 'total' fixture execution time.

It's possible to force the tests to run in separate tasks by adorning them with NCrunch's attributes, such as 'InclusivelyUsesAttribute'. If you tell the engine that each test makes inclusive use of a different resource, the engine will place each test in a separate task. Note this means that the fixture setup/teardown code will need to run multiple times. In this situation, the tests will be spread much more efficiently through the nodes, but the overall CPU required to run the test suite will likely be quite a bit higher.
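For example (a rough sketch assuming a reference to the NCrunch.Framework NuGet package; the resource names are arbitrary strings):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using NCrunch.Framework;

[TestClass]
public class SmokeTests
{
    // Each test declares inclusive use of a different resource name, so the
    // engine places each one in its own execution task, which can then be
    // assigned to a different grid node.
    [TestMethod, InclusivelyUses("smoke-slot-1")]
    public void Scenario1() { /* ... */ }

    [TestMethod, InclusivelyUses("smoke-slot-2")]
    public void Scenario2() { /* ... */ }
}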