GreenMoose;7980 wrote:
Ok so if I understand this correctly:
For project A, NCrunch test runners etc. will create X workspaces for it. Throughout the lifetime of the vstudio process NCrunch will create additional Y workspaces for it (for whatever reason) and when it reaches some limit at Z workspaces (for whatever reason) it will start to reuse the oldest workspace and iterate the process over and over again?
Not exactly - but close.
Workspaces are created only when they are needed and no existing unused workspace can be updated to fill the need.
The following conditions can cause a workspace to be cleaned up:
1. The related project has been unloaded (includes when the solution is closed and/or the engine is shut down)
2. The related project is structurally changed (i.e. you've made a change that has impacted the project XML, like adding a new file)
3. The context in which the workspace could still be useful has gone (i.e. on a grid node, the client has disconnected)
4. A new NCrunch process has started (includes grid node service) and existing 'stale' workspaces have been left behind by already terminated NCrunch processes. This can happen if the engine is unexpectedly terminated and wasn't able to clean up on exit, so we instead perform the cleanup in the next session.
There are two situations in which a workspace is considered 'in use' by the engine:
1. If it is being used by an NCrunch task process. Test runner processes will 'lock' all workspaces containing binaries that they have loaded into the test application domain.
2. If it is the most up-to-date ('primary') workspace for its project
So in theory, the max number of workspaces that can exist for a single project in your solution should be equal to the max number of task processes active at any one time, plus one (for the primary workspace). As NCrunch cannot use more than this number of workspaces at any one time, it won't construct them. Note that this is per-project, so if you have 5 projects in your solution, the max number of total workspaces is 5*(NumberOfActiveTaskProcesses+1).
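The arithmetic above can be sketched directly. Note this is just the formula from the paragraph with hypothetical numbers plugged in; the project and process counts are made up for illustration:

```python
# Theoretical upper bound on workspaces that can exist at once:
# one workspace per active task process, plus one 'primary' workspace,
# per project in the solution.
def max_workspaces(projects: int, active_task_processes: int) -> int:
    return projects * (active_task_processes + 1)

# Hypothetical solution: 5 projects, 4 task processes active at once.
print(max_workspaces(5, 4))  # 25

# A project that is loaded but never changed needs only its primary
# workspace, so real sessions usually sit well below this bound.
```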
On any decent-sized solution with the engine ramped right up, that'll be a big number. In practice though, you'll usually see far fewer than this, because it's unusual that you'll be changing every project in your solution over the same session. If you have a project that is loaded into the session and then never changed, NCrunch will never create more than one workspace for this project.
GreenMoose;7980 wrote:
So in theory if I run vstudio process for a long time the work spaces should stabilize and the oldest created workspace should not have a code base reflecting the start of vstudio process since it by then should have been updated and re-used, right?
Generally yes. What tends to happen is when you first start the engine and start making changes, NCrunch will make lots of workspaces. As you continue to work, it will need to create progressively fewer as it is able to re-use workspaces more and more. Eventually, you'll reach a stage where the engine simply doesn't need to create any more workspaces to cater for the way you're changing the solution, and the number will plateau.
It's for this reason that I suspect that the 3GB you've allocated on your RAM disk is probably just shy of where it needs to be. If you do some math over your solution and number of active task runners, you'll likely find that it's quite a way short of the theoretical maximum used by NCrunch, but I would think it unlikely that you'll ever reach this maximum over a single session.
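To make "do some math" concrete, here's a rough sizing sketch. The average workspace size and the project/process counts below are purely hypothetical assumptions for illustration; you'd substitute figures measured from your own solution:

```python
# Rough estimate of the theoretical maximum disk a RAM disk would need,
# built on the per-project bound of (task processes + 1) workspaces.
def theoretical_max_disk_mb(projects: int,
                            active_task_processes: int,
                            avg_workspace_mb: int) -> int:
    return projects * (active_task_processes + 1) * avg_workspace_mb

# Hypothetical: 30 projects, 8 task processes, ~50 MB per workspace.
needed_mb = theoretical_max_disk_mb(30, 8, 50)
print(needed_mb)  # 13500 MB - well above a 3 GB RAM disk
```

As the post notes, a real session is unlikely to ever reach this theoretical maximum, so the practical allocation can sit somewhere between the observed plateau and this bound.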
It is possible for you to have a workspace that contains very old code from early in the session, but this will only happen if you don't touch a project for a long time, then make a single change to it. I would think this an unlikely scenario as usually when we make changes to a project, we make quite a few of them at a time.
I find it's best to look at NCrunch's workspace disk consumption as being similar to how any runtime application consumes memory. Although as developers we do our best to keep memory consumption no higher than it needs to be, we usually have little strict control over the absolute memory consumption of an application, as this depends very much on how the application is used by the user. It's a natural expectation that an application being used to move mountains would require more memory to operate. We won't know how much memory is needed until we try to move the mountain. If the mountain is too big, we either buy more memory or we find a way to optimise.
Optimising NCrunch's workspace disk consumption is difficult because the need to operate with other toolsets (such as MSBuild) requires us to build workspaces with an identical structure to their project source code. This means the only way to reduce the disk consumption of workspaces is by making trade-offs (e.g. slowing down the engine or creating compatibility problems). The engine has no clearly understandable way to do this on its own - imagine if the engine suddenly paused for 2 minutes because it needed to wait for workspaces to free up before it could run tests. Therefore when workspace consumption needs to be limited, the sensible thing is to just turn down the level of concurrency by reducing the number of processing threads or the size of the process pool.