I can export the NCrunch Tests window for you, but there never appears to be any difference between the "Last Execution Time" and the "Expected Execution Time" for any individual test. I also never see fixture-level execution times on this window that make any sense. I would expect them to be the sum of the individual tests, but they're always values in the range of a few seconds, i.e. lower than many of the individual test times.
It's the NCrunch Processing Queue window where I get results that look more accurate, although, again, that window shows greatly differing Expected vs. Actual Processing Times.
The above is the result of running all of the tests in a particular project (I just clicked on the project name and chose "Run selected test(s) in new process"). According to NCrunch, this project contains 7 different fixtures with a total of 973 tests. Three of those fixtures are the ones described in my previous post, inheriting from a single base class. Happily, most of those tests were well distributed, not just across different tasks but also across many nodes.

However, one of those tasks contained 285 tests, which looks like an entire fixture. This fixture was expected to take 52 minutes to complete; it was accurately reported as taking about 30. The result was that most of the tests completed within probably 5-10 minutes, and then a single task/thread spun for an additional 20 minutes before I could get any feedback on any of those 285 tests, while 40+ potential processes sat idle. I can confirm that all of these tests were passing previously, so there should not be much concern that the times were calculated to be artificially low (as a result of incorrectly fast-failing in a previous run).
Now, regardless of whether it's correct that those 285 tests actually took 52 minutes to complete in a previous run, I would in no way expect the engine to decide that we would want to run all of those tests sequentially in a single task, especially if it thought they would take that long. Also, this is one of those 3 fixtures that inherit from the base test fixture. The other two fixtures had their 285+ tests distributed among different tasks and different nodes (although that's often not the case), so the structure of the test classes doesn't seem to be the cause here, since all three share the same one. These are also pretty strictly "unit tests", in that the database, file system, etc. are stubbed with in-memory implementations.
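For illustration, those three fixtures look roughly like this (a minimal sketch with placeholder names, assuming NUnit; the real classes obviously differ, but the shared base and the in-memory stubbing are the relevant parts):

```
using NUnit.Framework;

// Placeholder in-memory stand-ins; the real ones replace the database,
// file system, etc. so that the tests stay strictly "unit".
public class InMemoryDatabase { }
public class InMemoryFileSystem { }

// Hypothetical base fixture; the real one wires up the shared in-memory stubs.
public abstract class ServiceTestBase
{
    protected InMemoryDatabase Database;
    protected InMemoryFileSystem FileSystem;

    [SetUp]
    public void SetUpInfrastructure()
    {
        Database = new InMemoryDatabase();
        FileSystem = new InMemoryFileSystem();
    }
}

// Three fixtures of roughly this shape inherit from the same base, each with 285+ tests.
// Only one of them ended up batched into a single task in this run.
[TestFixture]
public class FirstLargeFixture : ServiceTestBase
{
    [Test]
    public void Some_unit_test()
    {
        // the real tests exercise services against the in-memory stubs
        Assert.That(Database, Is.Not.Null);
    }
}
```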
As for whether the expected and actual processing times really matter: this was a single test run on a single project. Usually, these tests run alongside probably a few thousand other tests, including integration tests that hit the file system, database, ElasticSearch index, and other slow infrastructure. So I could see the system load alone being responsible for the tests taking much longer when everything runs together.
So...maybe the execution times are correct? I'm not sure it matters. From what I'm seeing in this test run, I respectfully submit that there may just be something in the batching logic that needs refining.
I do not use AtomicAttribute anywhere.
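To be explicit about that: nothing in the solution does anything like the following, which (as I understand NCrunch.Framework) would be the deliberate way to pin a whole fixture into a single task. The class name here is just a placeholder:

```
using NCrunch.Framework;
using NUnit.Framework;

// Hypothetical example only; no fixture in my codebase is decorated like this.
[TestFixture]
[Atomic] // as I understand it, this would force every test in the fixture into one task
public class SomeLargeFixture
{
    [Test]
    public void Example_test()
    {
        Assert.Pass(); // placeholder
    }
}
```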