To a limited extent, yes.
NCrunch's built-in performance analysis does allow algorithms to be inspected line-by-line for their performance. In theory, if you wrote two tests running two different algorithms, the line-level timings should reveal bottlenecks, and the test execution times could give you a ballpark comparison of the algorithms' speed.
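To make that concrete, here's a minimal sketch of what such a pair of tests might look like. This assumes NUnit; the insertion sort, the input generator, and the comparison against Array.Sort are all hypothetical placeholders for whatever two algorithms you actually want to compare — the point is simply that each algorithm gets its own test over identical input, so NCrunch's per-test execution times and line timings can be compared side by side.

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class SortComparisonTests
{
    // Deterministic pseudo-shuffled input so both tests sort the same data.
    // (31 is coprime to 5000, so this is a permutation of 0..4999.)
    private static int[] MakeInput() =>
        Enumerable.Range(0, 5_000).Select(i => (i * 31) % 5_000).ToArray();

    // A deliberately naive O(n^2) insertion sort, standing in for
    // "algorithm A" in the comparison.
    private static void InsertionSort(int[] a)
    {
        for (int i = 1; i < a.Length; i++)
        {
            int key = a[i], j = i - 1;
            while (j >= 0 && a[j] > key) { a[j + 1] = a[j]; j--; }
            a[j + 1] = key;
        }
    }

    [Test]
    public void InsertionSort_SortsInput()
    {
        var data = MakeInput();
        var sw = Stopwatch.StartNew();
        InsertionSort(data);
        sw.Stop();
        Console.WriteLine($"InsertionSort: {sw.ElapsedMilliseconds} ms");
        Assert.That(data, Is.Ordered);
    }

    [Test]
    public void ArraySort_SortsInput()
    {
        var data = MakeInput();
        var sw = Stopwatch.StartNew();
        Array.Sort(data);   // "algorithm B" for the comparison
        sw.Stop();
        Console.WriteLine($"Array.Sort: {sw.ElapsedMilliseconds} ms");
        Assert.That(data, Is.Ordered);
    }
}
```

The Stopwatch output is just a sanity check; in NCrunch you'd mostly be looking at the execution time it reports per test and the hot-spot markers on individual lines.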
Where this starts to fall down is that NCrunch is restricted to running code in debug mode only (i.e. not release mode). This means that where algorithms would benefit from compiler optimisations, the performance analysis will be less accurate. The instrumentation used for the analysis is also quite heavy for frequently executed code, which will likely distort the metrics: algorithms that execute more lines of code will come off significantly worse than others.
So I would have to say that NCrunch is useful for evaluating the performance of algorithms and discovering their bottlenecks, but I would recommend against using it to make important decisions or publish comparisons without taking the above limitations into account.
At the moment, NCrunch doesn't have any features that measure memory allocation or resource efficiency. In this area, it probably isn't any more useful than any other test runner.
I hope this helps!
Cheers,
Remco