OneTimeTearDown is called more than once
berkeleybross
#1 Posted : Wednesday, December 27, 2017 10:45:40 PM(UTC)
Rank: Newbie
This seems similar to the issue "OneTimeSetUp is called more than once per assembly" (http://forum.ncrunch.net/yaf_postst2248_OneTimeSetUp-is-called-more-than-once-per-assembly.aspx), which has been stated to be "by design".

Whilst the stated solution (manually keeping track of whether the resource is initialised) works for a setup, it doesn't seem that the same logic can be applied to a teardown.

My scenario is that I am keeping a pool of Solr instances running in the background. The pool is lazily initialised, and I have a OneTimeTearDown to clean up all created instances. This works, but the cleanup is being called after every test, which means an item in the pool can never be reused. This makes the tests run really slowly (it takes about twenty seconds to spin up and destroy a Solr instance, multiplied by roughly 200 tests).
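To give this a concrete shape, the fixture looks roughly like the following (a simplified sketch only - the pool bookkeeping is elided and the names are illustrative rather than my real code):

using System.Collections.Concurrent;
using System.Diagnostics;
using NUnit.Framework;

[SetUpFixture]
public class SolrPoolFixture
{
    // Tests create Solr instances lazily as they need them and return them here for reuse.
    public static readonly ConcurrentBag<Process> Pool = new ConcurrentBag<Process>();

    [OneTimeTearDown]
    public void TearDownPool()
    {
        // Under NCrunch this runs after every batch of tests rather than once per
        // assembly, so the pool is emptied before anything can be reused.
        while (Pool.TryTake(out var solr))
        {
            if (!solr.HasExited)
                solr.Kill();
            solr.Dispose();
        }
    }
}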

On the one hand, I like that tests run in parallel in the same instance and can share access to the pool; on the other hand, I'm not getting any benefit from that, since the OneTimeTearDown keeps clearing the pool.

The "obvious" fix to me is that the OneTimeSetUp and OneTimeTearDown are only called once, regardless of how many tests are run in each domain, however I appreciate that this may be difficult. Is there any possibility of a "NoReallyOnlyTearDownOnce" mechanism that I can use to actually tear down the created solr instances? (I can then ignore the [OneTimeTearDown] when running in NCrunch). Failing that, do you have any suggestions for how I can safely share a pool of resources between tests, and correctly cleanup when testing is complete?
Remco
#2 Posted : Wednesday, December 27, 2017 11:13:28 PM(UTC)
Rank: NCrunch Developer
Hi, thanks for posting!

Effective cleanup of resources at the end of a test run is difficult to do, because of the nature of automated testing.

The key underlying problem is that there is no way to be certain that the cleanup code will have a chance to run. For example, if the test process experiences a critical failure (e.g. stack overflow, out of memory, unmanaged exception), it will be immediately terminated by the O/S and the cleanup code won't have a chance to run. When using a tool like NCrunch to run your tests all the time, the chances of something like this happening are an order of magnitude higher than with an end-to-end manual runner. The nature of this problem is such that no additional feature or modification would allow NCrunch to work reliably in the way you're hoping for (regardless of the complexities of bypassing the NUnit integration, which are also problematic).

The best way to approach this is with a re-think. Instead of making tests clean up out-of-process resources when they finish, look at cleaning up these resources when a test run starts. When a test starts its execution, it checks for orphaned Solr instances that aren't tied to any resident test process and terminates them. That way, you never encounter a scenario where the resources can leak. A side effect of this is that you'll probably still have some instances hanging around after you close down VS, though there are probably other mechanisms you could implement to handle this situation if it's important to you.
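As a very rough sketch of the shape I mean (FindOrphanedSolrProcesses is just a placeholder for whatever bookkeeping your pool can do, not an existing API):

[OneTimeSetUp]
public void CleanUpOrphanedInstances()
{
    // Placeholder: would return Solr processes whose owning test process no longer exists.
    foreach (var solr in FindOrphanedSolrProcesses())
    {
        try { solr.Kill(); }
        catch (InvalidOperationException) { /* already exited */ }
    }
}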
berkeleybross
#3 Posted : Wednesday, December 27, 2017 11:33:16 PM(UTC)
Rank: Newbie
Thanks for the quick response!

I see what you're saying. On the off chance you have experience with this kind of thing, would you recommend any strategies for detecting orphans? I like the idea that I can "adopt" orphans from previous processes, but how would you identify which ones are "in use"? Would keeping a log file of active/abandoned instances on disk, with a suitable timeout period before a forced clean-up, be a reliable method?

Thanks very much
Remco
#4 Posted : Thursday, December 28, 2017 12:53:21 AM(UTC)
Rank: NCrunch Developer
berkeleybross;11655 wrote:

I see what you're saying. On the off chance you have experience with this kind of thing, would you recommend any strategies for detecting orphans? I like the idea that I can "adopt" orphans from previous processes, but how would you identify which ones are "in use"? Would keeping a log file of active/abandoned instances on disk, with a suitable timeout period before a forced clean-up, be a reliable method?


This depends a bit on how the processes are launched by your own code. The key piece of information would be the process ID. If you can find a way to identify the process ID of the Solr instance being used by your test run, then you can store it somewhere in a globally accessible location (e.g. the file system, a memory mapped file, etc.).

Basically, you'd get the PID of the Solr instance, tie it up with the PID of your test runner process (Process.GetCurrentProcess().Id), then put these values together as an entry in your global state. At the start of every test run, the test reads this state and enumerates all entries, comparing them with the output from Process.GetProcesses(). Test processes that are listed in the global state but aren't showing up in Process.GetProcesses() must be from runs that have finished, and you can safely kill their related Solr instances.
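A minimal sketch of that idea, assuming a plain text file as the global state (the file name and format here are purely illustrative - a memory mapped file or any other shared store would work the same way):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;

public static class SolrInstanceRegistry
{
    // Illustrative location - each line records "testRunnerPid,solrPid".
    private static readonly string RegistryFile =
        Path.Combine(Path.GetTempPath(), "solr-test-instances.txt");

    // Called whenever a test run launches a Solr instance.
    public static void Register(int solrPid)
    {
        File.AppendAllText(RegistryFile,
            Process.GetCurrentProcess().Id + "," + solrPid + Environment.NewLine);
    }

    // Called at the start of a test run: kill any Solr instance whose owning
    // test process is no longer running.
    public static void KillOrphans()
    {
        if (!File.Exists(RegistryFile)) return;

        var livePids = new HashSet<int>(Process.GetProcesses().Select(p => p.Id));
        var stillOwned = new List<string>();

        foreach (var line in File.ReadAllLines(RegistryFile))
        {
            var parts = line.Split(',');
            var runnerPid = int.Parse(parts[0]);
            var solrPid = int.Parse(parts[1]);

            if (livePids.Contains(runnerPid))
            {
                stillOwned.Add(line); // owning test process is still alive
            }
            else
            {
                // Owning test process has exited, so the Solr instance is orphaned.
                try { Process.GetProcessById(solrPid).Kill(); }
                catch (ArgumentException) { /* already gone */ }
                catch (InvalidOperationException) { /* exited in the meantime */ }
            }
        }

        File.WriteAllLines(RegistryFile, stillOwned);
    }
}

In real code you'd also want to guard access to this file with a named mutex or a file lock, since parallel test processes can hit it at the same time.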

The hard part may well be getting the PID of the Solr process instance. If the API doesn't give you this, there is still a messy way to get it. You can infer the value from the timing of the Solr process being launched. For example:

1. Check the list of running processes and remember all the Solr ones
2. Initialise the new Solr process
3. (In a loop) Repeatedly query the list of running processes, comparing against the list from step 1, until a new process shows up that isn't already known
4. Take note of the PID of that new process

The above gets more complicated when running tests in parallel, because they can interfere with each other. So you'd need to wrap this up using System.Threading.Mutex to make sure only one test is launching Solr at any one time.
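Pulled together, that might look something like this (again just a sketch - the "solr" process name, the mutex name and the startSolr delegate are assumptions about your setup, not anything NUnit or NCrunch provides):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading;

public static class SolrLauncher
{
    // Machine-wide named mutex so parallel test processes can't launch at the same time.
    private static readonly Mutex LaunchMutex = new Mutex(false, @"Global\SolrTestLaunch");

    public static int LaunchAndGetPid(Action startSolr, TimeSpan timeout)
    {
        LaunchMutex.WaitOne();
        try
        {
            // 1. Remember the Solr processes that already exist.
            var before = new HashSet<int>(
                Process.GetProcessesByName("solr").Select(p => p.Id));

            // 2. Launch the new instance (however your pool normally does it).
            startSolr();

            // 3. Poll until a Solr process shows up that wasn't there before.
            var deadline = DateTime.UtcNow + timeout;
            while (DateTime.UtcNow < deadline)
            {
                var fresh = Process.GetProcessesByName("solr")
                    .FirstOrDefault(p => !before.Contains(p.Id));

                // 4. That new process's PID is the one to record.
                if (fresh != null)
                    return fresh.Id;

                Thread.Sleep(100);
            }

            throw new TimeoutException("No new Solr process appeared within the timeout.");
        }
        finally
        {
            LaunchMutex.ReleaseMutex();
        }
    }
}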

Of course, if Solr just has a .GetProcessId(), then it's much easier :)
berkeleybross
#5 Posted : Thursday, December 28, 2017 11:49:48 AM(UTC)
Rank: Newbie
Perfect, thank you