This problem is inherently hard to see all the way into, but I suspect there are a couple of mechanisms interacting here that are causing the confusion.
The first is that the state of fixtures in the Tests Window isn't simple. A fixture isn't a 'real' test in the sense that it can't exist on its own, yet it has its own state that is separate from its child tests while also depending on them.
Picture a fixture with two tests:
- FixtureX
  - Test1
  - Test2
If you choose to run only Test1, FixtureX will be executed as part of the test run. If the methods specific to FixtureX fail (OneTimeSetUp, TestFixtureSetUp, etc.), both FixtureX and Test1 are reported as failed, even if Test1's own test method would otherwise pass. If Test1 fails but FixtureX passes, the behaviour depends on the test framework, but generally FixtureX is considered failed with a message saying that a child test failed.
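To make that concrete, here's a minimal NUnit-style sketch of such a fixture (the class layout and setup logic are just for illustration):

using NUnit.Framework;

[TestFixture]
public class FixtureX
{
    [OneTimeSetUp]
    public void SetUpFixture()
    {
        // If this throws, FixtureX and every child test selected for the
        // run are reported as failed, even though the test methods never ran.
    }

    [Test]
    public void Test1()
    {
        Assert.Pass();
    }

    [Test]
    public void Test2()
    {
        Assert.Pass();
    }
}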
Now, if after this you choose to run Test2 in isolation, FixtureX needs to run again for that test, and its result replaces whatever was reported for the run of Test1. So if both FixtureX and Test2 pass, you can see FixtureX reporting a pass even while Test1 is still failing. This is because only one fixture result can be reported, and the most recent one always wins.
The above can create some confusion when selectively running tests, particularly with the 'Run all tests visible here' command.
The second thing that I believe is causing confusion is the behaviour of NCrunch when it's set to run impacted tests only (via engine mode or config setting). The profiling system used for impact detection cannot consider changes that are external to compiled code, so things like resource files and database changes won't trigger the engine to run related tests. This is simply a limitation of the technology; for NCrunch to understand when these externals change, it would need additional forms of integration. You can compensate for this by marking the affected tests with CategoryAttribute and specifically including that category for execution in the engine mode configuration, regardless of whether the tests are impacted (obviously this has performance implications). If this is tripping you up a lot, I suggest using the 'Run all tests automatically' engine mode instead.
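As a rough sketch of that workaround (the category name and test below are hypothetical; you'd pick your own name and reference it in the engine mode configuration):

using NUnit.Framework;

[TestFixture]
public class ReportTests
{
    // Tests that depend on resource files or the database get a marker category.
    // A custom engine mode can then be configured to always run this category,
    // even when impact detection doesn't see a relevant code change.
    [Test, Category("ExternalDependency")]
    public void Report_template_renders_expected_output()
    {
        // ... test body that reads a resource file or hits the database ...
    }
}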