Thanks for these extra details.
For some reason, the engine running in the console tool seems to think that the 5793 tests have a very low execution time, and that it must therefore be optimal to run them in one batch.
I wonder whether this is caused by the console tool using cached data that is somehow incorrect and not being updated. Test execution times are stored in the NCrunch cache file (the path of which should usually be provided to the tool). If the cache file cannot be read (e.g. due to version conflicts), the execution times are pulled from a secondary .executiontimes.cache file that should be stored in the same directory.
To confirm whether the timings are the cause of this problem, the easiest thing to do is to remove/move/rename both of these files. This way, the engine won't have any existing information on test timings, and it should split the tests as though each of them has at least a moderate execution time.
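As a rough sketch, clearing the caches could look something like the below. Note that the actual cache file path is whatever you pass to the console tool, and the file names here are placeholders for illustration only:

```shell
# Sketch only: simulate clearing NCrunch's timing caches.
# The directory and file names below are hypothetical stand-ins.
set -e

DIR=$(mktemp -d)                                # stand-in for your cache directory
touch "$DIR/MySolution.cache"                   # hypothetical main cache file
touch "$DIR/MySolution.executiontimes.cache"    # hypothetical secondary timing cache

# Rename both files out of the way so the engine finds no timing data.
# (Renaming rather than deleting lets you restore them afterwards.)
for f in "$DIR"/*.cache; do
  mv "$f" "$f.bak"
done
```

Renaming (rather than deleting) means you can easily restore the files if the timing data turns out not to be the problem.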
It's also worth checking that these .cache files are not stored in your VCS/git. The console tool may be using a 'fresh' copy of these files when pulling down the code, and this version of the files may happen to contain strange timing data (for example, from a test run where almost all the tests finished immediately due to a now-fixed bug in the code).
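A quick way to check this is to ask git which .cache files it is tracking. The snippet below sets up a throwaway repository purely to demonstrate the check; in practice you would just run the `git ls-files` line from your repository root (the cache file name here is a hypothetical example):

```shell
# Sketch only: demonstrate checking whether cache files are tracked by git.
set -e
REPO=$(mktemp -d)                           # throwaway repo for demonstration
cd "$REPO"
git init -q .
touch MySolution.executiontimes.cache       # hypothetical cache file name
git add MySolution.executiontimes.cache

# Non-empty output means cache files are in version control
# and should be untracked (git rm --cached) and added to .gitignore.
TRACKED=$(git ls-files -- '*.cache')
echo "$TRACKED"
```

If the check turns anything up, untrack the files with `git rm --cached` and add a matching pattern to your .gitignore so they don't come back.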