GreenMoose;12806 wrote:
1) So in short, workspace disk usage will grow over time until it reaches a point where it can always find an unused existing workspace and reuse it, right?
Yes. The number of workspaces will be greatly affected by the number of execution threads and pooled processes you're running. If you have a large number of threads, it will take longer to reach the 'peak' workspace count for each project. The theoretical limit on the number of workspaces can be very high if you have a large number of projects in your solution, but ordinary change patterns will cause it to peak much lower.
GreenMoose;12806 wrote:
2) Can I somehow make safe assumptions about redundant workspace folders and remove their contents via a script from time to time (e.g. a workspace folder that is 3 days old can be deleted if it also exists as a workspace folder that is 3 hours old)? Or will that "corrupt" the NCrunch state in some way?
NCrunch should always clean up the workspaces between sessions. There are two mechanisms to do this:
1. Workspaces should be cleaned up during a 'clean' exit of the engine. Not all exits are guaranteed to be clean, however.
2. To compensate for exits that aren't clean, the engine scans for orphaned workspaces on startup and removes them.
With the current instability in the ecosystem, I don't think I've ever had a session last as long as 3 days. If keeping the workspaces under control is important (e.g. when using a RAM disk), I'd suggest resetting your NCrunch engine every so often to recycle your session and have the workspaces cleaned up.
Avoid touching the workspaces while you have an NCrunch session in progress. This can cause the engine to become desynchronised with the workspace state and you'll likely end up with intermittent build errors.
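If you do decide to script a cleanup despite this, only run it while the engine is shut down. Below is a minimal sketch of the idea in Python; the workspace root path and the 3-day age threshold are assumptions you'd need to adapt to your own configured workspace base path, and this is not an official NCrunch tool.

    import shutil
    import time
    from pathlib import Path

    # Hypothetical values - point WORKSPACE_ROOT at your configured workspace
    # base path and pick an age threshold you are comfortable with.
    WORKSPACE_ROOT = Path(r"C:\NCrunchWorkspaces")
    MAX_AGE_SECONDS = 3 * 24 * 60 * 60  # 3 days

    def clean_stale_workspaces():
        # Run this ONLY when no NCrunch engine session is active, otherwise
        # the engine can become desynchronised with the workspace state.
        now = time.time()
        for folder in WORKSPACE_ROOT.iterdir():
            if not folder.is_dir():
                continue
            # Use the newest modification time anywhere inside the folder so
            # a workspace that was recently reused is never removed.
            latest = max(
                (p.stat().st_mtime for p in folder.rglob("*")),
                default=folder.stat().st_mtime,
            )
            if now - latest > MAX_AGE_SECONDS:
                shutil.rmtree(folder, ignore_errors=True)
                print(f"Removed stale workspace: {folder}")

    if __name__ == "__main__":
        clean_stale_workspaces()

The check on the newest modification time inside each folder is there to avoid removing a workspace that has recently been reused.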