Specflow [BeforeTestRun] and [AfterTestRun] methods are being called multiple times
samholder
#1 Posted : Thursday, November 26, 2020 1:50:04 PM(UTC)
Hi.

We are using SpecFlow 3.5 with xUnit as the test flavour to generate the tests. We are currently on NCrunch 4.3.0.13, awaiting new licences before we can upgrade...

I thought I understood the parallelisation model used by NCrunch, but I'm seeing behaviour which conflicts with that understanding (which may be an issue in SpecFlow and not NCrunch).

The behaviour I'm seeing is this:

Start three tests with a max parallelism of 2.

Test 1 calls [BeforeTestRun] from process dotnet, with process ID 1000, and I initialise a static WebDriver.
Test 2 calls [BeforeTestRun] from process dotnet, with process ID 2000, and I initialise a static WebDriver.
Test 1 (or test 2) completes and calls [AfterTestRun], which disposes the static WebDriver.
Test 3 is started. My understanding is that NCrunch will reuse an existing process to run test 3, to save creating a new test process.
Test 3 calls [BeforeTestRun] from process dotnet, with process ID 1000 (or 2000), and things fall over because the WebDriver is already initialised.

I would expect [AfterTestRun] not to be called, because the test process has not been finished with; it is being reused.
I would expect [BeforeTestRun] not to be called, because the environment has already been initialised for this process and the static variables are already set up.

We create a single static WebDriver so that when many tests are run in sequence the total runtime is reduced (previously the tests were run without NCrunch, but now we want to distribute them across the NCrunch grid).
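For concreteness, the hook pattern in question looks roughly like this (a simplified sketch; the class name and the guard are illustrative, not our exact code):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using TechTalk.SpecFlow;

[Binding]
public class WebDriverHooks
{
    // Shared by every scenario that runs in this test process.
    public static IWebDriver Driver;

    [BeforeTestRun]
    public static void BeforeTestRun()
    {
        // This is the "falls over" step: when NCrunch reuses the process
        // for a later batch, the static field is already populated.
        if (Driver != null)
            throw new InvalidOperationException("WebDriver already initialised");
        Driver = new ChromeDriver();
    }

    [AfterTestRun]
    public static void AfterTestRun()
    {
        // Quit the browser; the static field still holds the (now dead) driver.
        Driver?.Quit();
    }
}
```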

Is this an NCrunch issue? Or is it something that SpecFlow needs to fix?

There was a similar issue with SpecFlow and NUnit previously, so I wonder if this is related:

https://github.com/SpecFlowOSS/SpecFlow/issues/638

If it is a SpecFlow issue, I may be able to fix it myself. I'll try to look at that option over the weekend, but wanted to post here in case there is something on the NCrunch side causing this.
Remco
#2 Posted : Thursday, November 26, 2020 11:50:44 PM(UTC)
Hi, thanks for sharing this issue.

I think the problem here may stem from the manner in which BeforeTestRun and AfterTestRun are implemented in the underlying test framework. I haven't checked this, but I'm willing to bet it is implemented using NUnit's SetupFixture.

SetupFixture works by being triggered when NUnit is instructed to start and finish a test run. Because each batch of tests run by NCrunch is technically a test run of its own, the methods in this fixture can be called multiple times within a given test process.

This is not ideal behaviour, and we would change it if we could safely do so. However, we're limited in what we can change in the framework's behaviour from the outside. Instead, we advise designing your code to make problems in this area structurally impossible: use a static boolean flag inside the setup code that prevents it from running more than once.
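A minimal sketch of that guard, with illustrative names:

```csharp
using TechTalk.SpecFlow;

[Binding]
public class GuardedHooks
{
    private static bool _initialised;

    [BeforeTestRun]
    public static void BeforeTestRun()
    {
        // NCrunch may call this once per test batch in the same process,
        // so only run the expensive setup the first time.
        if (_initialised)
            return;
        _initialised = true;

        // ... one-time setup, e.g. creating the shared WebDriver ...
    }
}
```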
samholder
#3 Posted : Friday, November 27, 2020 12:24:25 PM(UTC)
Thanks. To be clear, we are using xUnit, not NUnit. I think you are probably right, and I'll look into the underlying implementation a bit more to see if there are options there.

The problem with the static flag is that [AfterTestRun] can then never do its job: we can't tell when things actually finish, so we will never dispose the WebDriver instance. I've tested simply not disposing it, and that lets me reuse it, but when the tests finish the ChromeDriver instances remain.

I suppose I could look into running the [AfterTestRun] code when the process is terminated instead.
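For example (a sketch, reusing the static Driver field from my earlier snippet): hook AppDomain.ProcessExit and quit the driver there. This is best-effort only, since the event never fires if the process is killed.

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using TechTalk.SpecFlow;

[Binding]
public class WebDriverHooks
{
    public static IWebDriver Driver;

    [BeforeTestRun]
    public static void BeforeTestRun()
    {
        if (Driver != null)
            return; // process reused by NCrunch: keep the existing driver

        Driver = new ChromeDriver();

        // Best-effort disposal when the test process exits normally.
        // Does NOT fire if the runner is killed or times out.
        AppDomain.CurrentDomain.ProcessExit += (sender, args) => Driver?.Quit();
    }
}
```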

Thanks for the reply.
Remco
#4 Posted : Friday, November 27, 2020 7:06:00 PM(UTC)
samholder;15131 wrote:
Thanks. To be clear, we are using xUnit, not NUnit. I think you are probably right, and I'll look into the underlying implementation a bit more to see if there are options there.


Sorry for the assumption. The mechanics are the same, though: any feature of a test framework that relies on knowing the start or end of a test run won't work correctly under NCrunch.

samholder;15131 wrote:

The problem with the static flag is that [AfterTestRun] can then never do its job: we can't tell when things actually finish, so we will never dispose the WebDriver instance. I've tested simply not disposing it, and that lets me reuse it, but when the tests finish the ChromeDriver instances remain.

I suppose I could look into running the [AfterTestRun] code when the process is terminated instead.


This is correct. There is no way to set a static flag that can safely identify when the test run is fully completed. It's worth considering, though, that safe end-of-run cleanup is actually impossible anyway:
- An unstable test can corrupt the runner process or trigger a timeout at any time, in which case the process is terminated with no opportunity to run further code.
- The machine running the tests can be hit by a broader problem, such as an OS- or hardware-level fault forcing a restart, or even a loss of power (probably not a big deal in your case, since the browser disappears with it).
- A test runner process may be reused later, since NCrunch is by definition a continuous runner; there is no real end of run. We could in theory modify the engine to give you a clean-up hook, but that would be messy and unreliable, and it wouldn't work outside NCrunch or follow the test framework's mechanics.

Cleanup code within any test has always been 'best effort'. Prior to continuous runners like NCrunch, this didn't matter much, because the tests were usually run with someone in attendance. When we thrash the tests in the background thousands of times per day, the limitations of cleanup code become much more of an issue.

I recommend examining other design approaches. For example, you could track the relationship between ChromeDriver instances and test runner processes. Your startup code can then identify orphaned Chrome instances and clean them up.
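To illustrate the idea (the file format, registry path and helper names are made up, and cross-process file locking is ignored for brevity): record runner-PID/driver-PID pairs when a driver is created, then reap orphans at startup. When the ChromeDriver is started via your own ChromeDriverService, the spawned driver's PID should be available from the service's ProcessId property.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;

public static class DriverRegistry
{
    // Illustrative location; in practice use a path every runner on the machine can see.
    private static readonly string RegistryPath =
        Path.Combine(Path.GetTempPath(), "webdriver-registry.txt");

    // Call right after creating a driver, passing the chromedriver process ID.
    public static void Register(int driverPid)
    {
        File.AppendAllLines(RegistryPath,
            new[] { $"{Process.GetCurrentProcess().Id},{driverPid}" });
    }

    // Call from startup code ([BeforeTestRun]): kill drivers whose runner has died.
    public static void KillOrphans()
    {
        if (!File.Exists(RegistryPath))
            return;

        var stillAlive = new List<string>();
        foreach (var line in File.ReadAllLines(RegistryPath))
        {
            var parts = line.Split(',');
            var runnerPid = int.Parse(parts[0]);
            var driverPid = int.Parse(parts[1]);

            if (IsRunning(runnerPid))
                stillAlive.Add(line);   // runner alive: leave its driver alone
            else
                TryKill(driverPid);     // runner gone: the driver is orphaned
        }
        File.WriteAllLines(RegistryPath, stillAlive);
    }

    private static bool IsRunning(int pid)
    {
        try { Process.GetProcessById(pid); return true; }
        catch (ArgumentException) { return false; }
    }

    private static void TryKill(int pid)
    {
        try { Process.GetProcessById(pid).Kill(); }
        catch { /* already exited, or not ours to kill */ }
    }
}
```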
samholder
#5 Posted : Saturday, November 28, 2020 9:45:03 AM(UTC)
Yep, all good points. We already do some cleanup of older chromedriver processes for exactly the reasons you outline, but maybe the better approach would be to use something like the build number in the name of the chromedriver process, and then have the build clean up the processes it knows its own run created.
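That could look something like the sketch below, assuming the build exposes a BUILD_NUMBER environment variable (names and paths are illustrative): copy chromedriver.exe to a build-tagged file name so the OS process name carries the build number, then have the build's cleanup step kill by that name.

```csharp
using System;
using System.Diagnostics;
using System.IO;
using OpenQA.Selenium.Chrome;

public static class TaggedChromeDriver
{
    private static string Build =>
        Environment.GetEnvironmentVariable("BUILD_NUMBER") ?? "local";

    public static ChromeDriver Create(string driverDirectory)
    {
        // e.g. "chromedriver_b1234.exe": the process name now identifies the build.
        var taggedName = $"chromedriver_b{Build}.exe";
        var taggedPath = Path.Combine(driverDirectory, taggedName);

        if (!File.Exists(taggedPath))
            File.Copy(Path.Combine(driverDirectory, "chromedriver.exe"), taggedPath);

        var service = ChromeDriverService.CreateDefaultService(driverDirectory, taggedName);
        return new ChromeDriver(service);
    }

    // Run at the end of the build: kill only the drivers this build's runs created.
    public static void CleanUp()
    {
        foreach (var p in Process.GetProcessesByName($"chromedriver_b{Build}"))
        {
            try { p.Kill(); } catch { /* already exited */ }
        }
    }
}
```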

Thanks for the input, and thanks for an awesome tool (our test run time is down to ~30% of what it was, now that I've switched the integration tests to run on the grid :))
Remco
#6 Posted : Saturday, November 28, 2020 10:04:13 AM(UTC)
samholder;15138 wrote:

Thanks for the input, and thanks for an awesome tool (our test run time is down to ~30% of what it was, now that I've switched the integration tests to run on the grid :))


That's an awesome result :) Keep pushing!