I've completed my investigation of this problem and concluded that it is, unfortunately, by design.
Xunit v2 has a concept known as pre-enumeration, which involves the granular discovery of individual theory test cases at discovery time, before any of them are executed. Pre-enumeration allows NCrunch to detect theory test cases well in advance of their execution, which in turn allows them to be shown in the Tests Window, split between test tasks, run in parallel, distributed, etc.
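To show what pre-enumeration looks like from the test author's side, here is a minimal sketch (the class and method names are my own, purely for illustration) of a theory whose data is stable enough to be pre-enumerated:

```csharp
using Xunit;

public class CalculatorTests
{
    // With stable, serializable data rows, Xunit v2 can pre-enumerate this
    // theory at discovery time: each [InlineData] row surfaces as its own
    // test case (e.g. Adds(a: 1, b: 2, expected: 3)) before anything runs,
    // which is what lets NCrunch list, split and distribute the cases.
    [Theory]
    [InlineData(1, 2, 3)]
    [InlineData(2, 2, 4)]
    public void Adds(int a, int b, int expected)
    {
        Assert.Equal(expected, a + b);
    }
}
```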
AutoFixture specifically disables pre-enumeration under Xunit2, for good reason. The data AutoFixture injects into test case parameters is not 'stable'. For example, it may contain GUIDs and other values that change every time they are generated. When test cases are discovered prior to execution, they must have consistent parameters that can be used to uniquely identify them, so that the results of their later execution runs can be tied back to the discovered test. Put simply, theory test cases using AutoFixture cannot have their data correlated between discovery and execution.
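By contrast, here is a rough sketch of the kind of theory that runs into this (again, the class and method names are made up; the only assumption is the standard [AutoData] attribute from the AutoFixture.Xunit2 package):

```csharp
using System;
using AutoFixture.Xunit2;
using Xunit;

public class OrderTests
{
    // [AutoData] asks AutoFixture to generate the arguments. A fresh Guid and
    // string are produced every time the data is generated, so a test case
    // enumerated at discovery time would carry different argument values than
    // the one that actually executes later. Because the data is not stable,
    // AutoFixture opts out of pre-enumeration and the theory is discovered as
    // a single test case with no concrete argument values.
    [Theory]
    [AutoData]
    public void ProcessesOrder(Guid id, string customer)
    {
        Assert.NotEqual(Guid.Empty, id);
        Assert.False(string.IsNullOrEmpty(customer));
    }
}
```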
The reason this problem is visible under NCrunch but not under other runners is that NCrunch handles the lifespan of a test very differently. If you run an AutoFixture theory using the Xunit2 console runner, you'll find that the runner still reports the individual test cases in its execution output. For a console runner, it's a simple thing to report the test run as it happens, since there is no requirement for the reported tests to exactly match what may have been discovered earlier.
Other test runners most likely just generate new test cases dynamically to hold the results as they encounter them during execution. Unfortunately, such a solution is fundamentally incompatible with a number of NCrunch's features, such as parallel execution, dynamic execution order and distributed processing. NCrunch relies on a 'strong' model in which each test is unique and has its data correlated across all execution runs. A 'dynamic' test case that could have a different signature on each execution run is fundamentally at odds with these concepts as they are implemented in the NCrunch engine.
A possible solution would be to introduce a 'cosmetic' approach of bundling up the results inside sub-tests so that they can be reported under the main theory inside the Tests Window. While this might seem simple at first glance, there is huge hidden complexity here - for example, the individual sub-tests would need to have their own uniquely reported code coverage and performance data. It wouldn't be possible to run them individually outside of their main theory (as they cannot be uniquely identified), so many of the UI commands for them simply wouldn't work. They couldn't be ignored using NCrunch's 'ignored tests' mechanism, as there is no reliable way to distinguish them from their siblings. Basically, it would be a horrible nightmare that would create a confusing and complex result, and simply isn't worth pursuing for such an edge case.
So I'm sorry, but this feature cannot be supported by NCrunch.