NCrunch not cleaning old workspaces, still an issue?
GreenMoose
#1 Posted : Tuesday, November 10, 2015 4:03:56 PM(UTC)
Regarding cleanup of workspaces (looking at the 3-year-old post at http://forum.ncrunch.net...-going-up.aspx#post1072, it seems NCrunch does not clean that aggressively while continuously running?).

On some grid nodes, and locally, I use a RAM drive to store the workspaces. Eventually it always tends to become full, and the more often I alter a larger test project, the quicker it fills up.

Currently it is 3GB and it holds for about 1 hour of work, heavily dependent on where I make my changes it seems, which is a bit frustrating.

I fired up WinDirStat and noticed the following "sequence" http://screencast.com/t/XWZwcH8J82 of duplicated binaries etc.

The largest file is the test project's .pdb, which is duplicated in a lot of places.
My oldest workspace folder is numbered "1" (created roughly 1 hour ago when I restarted vstudio) and the newest is 196.

If I use BCompare to diff the newest (184) vs. the oldest (34) workspace folders (for the test project taking up the most space), I see that the folder structure of the two is identical except for 3 files (2 .cs and 1 .csproj). These files are outdated (over 1 hour old) and I can't understand why they need to stick around.
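
For reference, here is a rough sketch of how the same tallying could be scripted instead of eyeballing it in WinDirStat. It assumes all workspaces sit under one root folder on the RAM drive (the path below is just a placeholder) and treats files with the same name and size as duplicates:

```python
# Rough sketch: tally NCrunch workspace sizes and spot large duplicated files.
# Assumptions: all workspaces live under a single root folder on the RAM drive
# (the path is a placeholder), and "duplicate" means same file name + same size.
import os
from collections import defaultdict

WORKSPACE_ROOT = r"R:\NCrunchWorkspaces"  # placeholder, adjust to your RAM drive

sizes_by_workspace = defaultdict(int)   # total bytes per top-level workspace folder
copies_by_file = defaultdict(list)      # (name, size) -> list of paths it appears at

for dirpath, _dirnames, filenames in os.walk(WORKSPACE_ROOT):
    # Treat the first folder level under the root as one workspace.
    rel = os.path.relpath(dirpath, WORKSPACE_ROOT)
    workspace = rel.split(os.sep)[0]
    for name in filenames:
        full = os.path.join(dirpath, name)
        size = os.path.getsize(full)
        sizes_by_workspace[workspace] += size
        copies_by_file[(name, size)].append(full)

total_mb = sum(sizes_by_workspace.values()) / 2**20
print("Total workspace usage: %.1f MB across %d workspaces" % (total_mb, len(sizes_by_workspace)))

# The largest files that appear in more than one workspace (e.g. the big .pdb).
dupes = [(name, size, paths) for (name, size), paths in copies_by_file.items() if len(paths) > 1]
for name, size, paths in sorted(dupes, key=lambda d: d[1] * len(d[2]), reverse=True)[:10]:
    print("%-40s %6.1f MB  x%d copies" % (name, size / 2**20, len(paths)))
```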

Is the "cleaning behavior" described in the above 3-year-old post still applicable, so that NCrunch doesn't wipe old workspaces until I actually reset the engine, or is this some kind of bug?

Thanks.
Remco
#2 Posted : Tuesday, November 10, 2015 10:07:56 PM(UTC)
Hi, thanks for sharing this issue.

NCrunch is militant about cleaning up workspaces: it has a very well tested routine to handle this, plus a number of fallbacks in case that routine fails. If you're seeing the workspaces overstep the allocated size of your RAM disk, it's more likely that the engine's workspace requirements are simply greater than the space you've allocated to the disk.

These workspaces look pretty large (50MB-100MB is large for a single project workspace). It might be worth looking through them to see if anything is included in the workspace (NuGet packages perhaps?) that doesn't need to be there.

The number of workspaces needed by the engine is dependent upon the number of task runner processes active at any one time. This is in turn controlled by these two configuration settings:
http://www.ncrunch.net/documentation/reference_global-configuration_max-test-runners-to-pool
http://www.ncrunch.net/documentation/reference_global-configuration_max-number-of-processing-threads

The best option would be to see if you have any way of increasing the size of your RAM disk. Since you're able to work for an hour without the node failing, you're probably very close to the maximum space NCrunch needs.
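
If you want to confirm how close you're getting, a minimal sketch along these lines could log the free space on the RAM drive over a session (the drive letter is an assumption):

```python
# Minimal sketch: log the free space on the RAM drive once a minute so you can
# see how close a session gets to the 3GB limit. The drive letter is an
# assumption - point it at whatever drive hosts the NCrunch workspaces.
import shutil
import time

RAM_DRIVE = "R:\\"  # placeholder

while True:
    usage = shutil.disk_usage(RAM_DRIVE)
    print("free: %.0f MB of %.0f MB" % (usage.free / 2**20, usage.total / 2**20))
    time.sleep(60)
```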
GreenMoose
#3 Posted : Tuesday, November 10, 2015 11:20:41 PM(UTC)
The largest file in the workspace folders is the test project's .pdb file (18MB). But what is the purpose of NCrunch keeping workspaces with outdated code in them? If a workspace was in use, shouldn't the code within it reflect the latest version?
Remco
#4 Posted : Tuesday, November 10, 2015 11:33:05 PM(UTC)
Once a workspace has been created, it is kept, because there is a high chance it will be used again in future.

Creating workspaces is very expensive, especially if they're big. There is a limit to how many workspaces NCrunch will create, as it can only use so many at one time and will always update and reuse existing ones where available.
GreenMoose
#5 Posted : Tuesday, November 10, 2015 11:51:10 PM(UTC)
Remco;7979 wrote:
Once a workspace has been created, it is kept, because there is a high chance it will be used again in future.
Creating workspaces is very expensive, especially if they're big. There is a limit to how many workspaces NCrunch will create, as it can only use so many at one time and will always update and reuse existing ones where available.

Ok so if I understand this correctly:
For project A, NCrunch test runners etc. will create X workspaces for it. Throughout the lifetime of the vstudio process NCrunch will create an additional Y workspaces for it (for whatever reason), and when it reaches some limit of Z workspaces for it (for whatever reason) it will start to reuse the oldest workspace and iterate that process over and over again?

So in theory, if I run the vstudio process for a long time, the workspaces should stabilize, and the oldest created workspace should not have a code base reflecting the start of the vstudio process, since by then it should have been updated and reused, right?
Remco
#6 Posted : Wednesday, November 11, 2015 3:23:50 AM(UTC)
GreenMoose;7980 wrote:

Ok so if I understand this correctly:
For project A, NCrunch test runners etc. will create X workspaces for it. Throughout the lifetime of the vstudio process NCrunch will create an additional Y workspaces for it (for whatever reason), and when it reaches some limit of Z workspaces for it (for whatever reason) it will start to reuse the oldest workspace and iterate that process over and over again?


Not exactly - but close.

Workspaces are created only when they are needed and no existing unused workspace can be updated to fill the need.

The following conditions can cause a workspace to be cleaned up:
1. The related project has been unloaded (includes when the solution is closed and/or the engine is shut down)
2. The related project is structurally changed (i.e. you've made a change that has impacted the project XML, like adding a new file)
3. The context in which the workspace could still be useful has gone (i.e. on a grid node, the client has disconnected)
4. A new NCrunch process has started (includes grid node service) and existing 'stale' workspaces have been left behind by already terminated NCrunch processes. This can happen if the engine is unexpectedly terminated and wasn't able to clean up on exit, so we instead perform the cleanup in the next session.

There are two situations in which a workspace is considered 'in use' by the engine:
1. If it is being used by an NCrunch task process. Test runner processes will 'lock' all workspaces containing binaries that they have loaded into the test application domain.
2. If it is the most up-to-date ('primary') workspace for its project

So in theory, the max number of workspaces that can exist for a single project in your solution should be equal to the max number of task processes active at any one time, plus one (for the primary workspace). As NCrunch cannot use more than this number of workspaces at any one time, it won't construct more than that. Note that this is per-project, so if you have 5 projects in your solution, the max number of total workspaces is 5*(NumberOfActiveTaskProcesses+1).

On any decent sized solution with the engine ramped right up, that'll be a big number. In practice, though, you'll usually see far fewer than this, because it's unusual to change every project in your solution over the same session. If you have a project that is loaded into the session and then never changed, NCrunch will never create more than one workspace for it.
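
To make that formula concrete, here is a back-of-envelope sketch. The concrete numbers (project count, task process count, per-workspace size) are illustrative assumptions only, not measurements from your setup:

```python
# Back-of-envelope sketch of the per-project workspace ceiling described above.
# All of the concrete numbers here are illustrative assumptions.
projects_in_solution = 5       # projects that actually change during the session
active_task_processes = 4      # bounded by the two configuration settings linked earlier
workspace_size_mb = 80         # roughly the middle of the 50MB-100MB range mentioned above

max_workspaces = projects_in_solution * (active_task_processes + 1)
max_footprint_mb = max_workspaces * workspace_size_mb

print("theoretical max workspaces:", max_workspaces)          # 25
print("theoretical max footprint: %d MB" % max_footprint_mb)  # 2000 MB
```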

GreenMoose;7980 wrote:

So in theory, if I run the vstudio process for a long time, the workspaces should stabilize, and the oldest created workspace should not have a code base reflecting the start of the vstudio process, since by then it should have been updated and reused, right?


Generally, yes. What tends to happen is that when you first start the engine and begin making changes, NCrunch will create lots of workspaces. As you continue to work, it will need to create progressively fewer, as it is able to re-use workspaces more and more. Eventually you'll reach a stage where the engine simply doesn't need to create any more workspaces to cater for the way you're changing the solution, and the number will plateau.

It's for this reason that I suspect the 3GB you've allocated on your RAM disk is probably just shy of where it needs to be. If you do some math over your solution and number of active task runners, you'll likely find that 3GB is quite a way short of the theoretical maximum used by NCrunch, but I would think it unlikely that you'll ever reach this maximum over a single session.

It is possible for you to have a workspace that contains very old code from early in the session, but this will only happen if you don't touch a project for a long time, then make a single change to it. I would think this an unlikely scenario as usually when we make changes to a project, we make quite a few of them at a time.

I find it's best to look at NCrunch's workspace disk consumption as being similar to how any runtime application consumes memory. Although as developers we do our best to keep memory consumption no higher than it needs to be, we usually have little strict control over the absolute memory consumption of an application, as this depends very much on how the application is used by the user. It's a natural expectation that an application being used to move mountains would require more memory to operate. We won't know how much memory is needed until we try to move the mountain. If the mountain is too big, we either buy more memory or we find a way to optimise.

Optimising NCrunch's workspace disk consumption is difficult because the need to operate with other toolsets (such as MSBuild) requires us to build workspaces with an identical structure to their project source code. This means the only way to reduce the disk consumption of workspaces is by making trade-offs (i.e. slowing down the engine or creating compatibility problems). The engine has no clearly understandable way to do this on its own - imagine if the engine suddenly paused for 2 minutes because it needed to wait for workspaces to free up before it could run tests. Therefore when workspace consumption needs to be limited, the sensible thing is simply to turn down the level of concurrency by reducing the number of processing threads or the size of the process pool.