Thanks for sharing all this detailed information. Given everything you've provided, I'm confident the problem here is caused by the size of NCrunch's in-memory code coverage database, which is held inside the engine host process.
The coverage database is the most complex area of the product and has been revised several times since V1. NCrunch needs to store a vast amount of code coverage data and be able to rapidly merge it in real time from many background test runs. As it stands, this system allocates about 60 bytes of memory for every covered line of code, for each test. So if you have 28821 tests and each one covers 1% of a 655822-line codebase, we'd be looking at 28821 * 655822 * 0.01 = 189,014,459 data points, which at 60 bytes each comes to about 11.3GB of memory.
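As a quick sanity check, the arithmetic above works out like this (a C# sketch; the figures are the ones from this thread, and 60 bytes per data point is the rough internal allocation quoted above):

```csharp
using System;

class CoverageEstimate
{
    static void Main()
    {
        // Figures from this thread.
        long tests = 28_821;
        long codebaseLines = 655_822;
        double coveragePerTest = 0.01;   // each test covers ~1% of the codebase
        long bytesPerDataPoint = 60;     // rough per-covered-line cost per test

        long dataPoints = (long)Math.Round(tests * codebaseLines * coveragePerTest);
        double gigabytes = dataPoints * (double)bytesPerDataPoint / 1e9;

        Console.WriteLine($"{dataPoints:N0} data points ≈ {gigabytes:F1} GB");
        // ≈ 189,014,459 data points ≈ 11.3 GB
    }
}
```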
So the numbers add up. With the current internal design of NCrunch, 28k tests with relatively high coverage density over a 656k-line codebase means 16GB of memory simply won't be enough to hold all coverage data in fully indexed form and still leave space for other essentials (VS itself, other metadata derived from the tests, background execution processes, etc.). It may be possible to stretch this limit through additional paging/swapping to disk, but you'd pay a significant price for it in performance.
We're always looking to improve the performance of NCrunch's coverage system, but there's no low-hanging fruit left in this area. Further optimisation would require a significant redesign, involving considerable time and risk. Even assuming we find a way to push this further, it's highly unlikely we could do so within the next year.
Given the above, I can only suggest the following options:
- Increase the available memory on your dev workstations and CI server. Considering the cost of developer time and the availability of hardware, this is the option I would most recommend.
- Consider consolidating your tests. 28k tests is a very high number for the size of your codebase, so my assumption is that many of these are multi-dimensional/generated test cases (TestCase, TestCaseSource) or exist through inheritance structures. There may be opportunities to reduce the total number of tests by sacrificing some granularity in reporting their results.
- A less appealing but entirely workable option is to disable coverage tracking for parts of your solution, either with NCrunch's code coverage suppression comments or by turning off the 'Instrument Output Assembly' setting. The less coverage is tracked, the smaller the in-memory database will be.
- Breaking your overall solution down into sub-solutions would reduce the amount of code that needs to be indexed and free up memory. I don't expect this would be easy, and you'd probably take a productivity hit, but it may still be an option.
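To illustrate the consolidation idea, here's a sketch of what collapsing generated test cases can look like in NUnit (the test subject `Evaluator.Run` and the case data are hypothetical, just for illustration):

```csharp
using NUnit.Framework;

public class ParserTests
{
    // Before: each input is its own physical test, so the coverage
    // database stores a full set of data points per case.
    [TestCase("1+1", 2)]
    [TestCase("2*3", 6)]
    [TestCase("8-5", 3)]
    public void Evaluates_Expression(string input, int expected) =>
        Assert.AreEqual(expected, Evaluator.Run(input));

    // After: one physical test exercising all cases. You lose per-case
    // result reporting, but coverage is now stored for a single test.
    [Test]
    public void Evaluates_All_Expressions()
    {
        var cases = new (string Input, int Expected)[]
        {
            ("1+1", 2), ("2*3", 6), ("8-5", 3),
        };

        foreach (var c in cases)
            Assert.AreEqual(c.Expected, Evaluator.Run(c.Input), $"Failed for: {c.Input}");
    }
}
```

The same trade-off applies to TestCaseSource-driven tests: move the iteration inside the test body and include the failing input in the assertion message so failures remain diagnosable.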
My experience suggests that the dimensions of your solution are at a critical point where your existing hardware will likely struggle with many tools in the .NET space. VS2019 alone needs about 4 logical cores to run without serious performance problems on even a moderate-sized solution, and adding ReSharper on top leaves very little room to move. I don't think you'd regret a hardware upgrade.
Updating NCrunch to V4 will improve performance across the board, but will not reduce the memory consumption of the code coverage database.