Hi, thanks for sharing your thoughts on this.
This is something that we've looked into before, but decided not to change at this time for two reasons:
1. It's surprisingly complex to do. When a solution contains tests that target specific nodes (i.e. using 'capabilities'), it isn't always easy to know whether a full cycle has completed just because tests are no longer being run. The engine is extremely configurable, and even under the current circumstances it's not easy to know when a test run is fully complete. This was the hardest part of engineering the console tool, because it's basically an adapted continuous runner. For example, a particular test in the suite may be set to run on a specific grid node under specific parameters. Although we could check for such a test when attempting to terminate the run, the check could involve quite a large amount of data and would be difficult to engineer without introducing a potential performance issue. It's possible; it's just hard to do and very hard to get right.
2. There are valid (admittedly rare) scenarios where one of the slow nodes may actually fail the run due to differences in its environment. In such a case, it's useful for the result to be reported rather than having the run truncated. For example, a user may have a grid node running on a different version of Windows that encounters a build issue or test discovery issue not experienced on the other nodes. Some people use grid nodes for cross-platform testing.
For the time being, the rational approach is to examine your NCrunch timeline and identify whether each node is adding value to a CI run. If it isn't, take it out. Including nodes in the run when they never get to report anything useful is technically a waste of resources anyway.