Besides execution counts I don't have any direct ideas for passive presentation.
Possibly some kind of code map with a heat signature. Not sure how useful that would be though.
(In my dreams the solution explorer, or files for that matter, has been replaced with a 3D-rendering of the application, 2D for module layouts, and 1D for layer separations, such that dataflow can be visualized as connections between layers, making an obvious place to hook in various visualization probes for debugging)
In a recent WPF application we developed, I depended mostly on logging for post-execution analysis, as you say. That is a nice way to discover such oddities as code being executed more often than necessary. Here are some of the strategies I've adopted to make this effective:
* Add tracing to both the Get and Set implementations of properties (using a common base class capturing [CallerMemberName])
* Add tracing to events and Rx.OnNext (this could probably be done as a custom scheduler)
* Add visible headers (lots of whitespace) to the trace for significant events, mostly top level calls (SpecFlow step -> PageObject method called -> ICommand executed)
* Add a simple instance id (static int++ on new) to use as prefix for tracing (ClassName + instanceid + method called + arguments)
- This is also used to trace lifecycle events such as new and dispose. I had the finalizer report undisposed instances to track down leaks. I had to force a bunch of GC calls and such at the end of each test to keep those from being reported in the wrong test, though.
* Prefix all traces with threadid
* Prefix all traces with time (offset from start is enough, but I implemented it in such a way that it usually measures offset from NCrunch process start, which is less useful)
- I find this is mostly used to find performance issues, so it would be much better to have it presented as execution time in the usual profiler tree
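Several of the bullets above can live in one common base class; here is a minimal sketch of that idea (TracedObject, GetTraced and SetTraced are invented names, and a real implementation would write to a trace listener rather than the console):

```csharp
using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;
using System.Threading;

// Sketch: instance id, thread id, time offset and [CallerMemberName]
// tracing combined in a shared base class.
public abstract class TracedObject
{
    static int _nextId;
    static readonly Stopwatch Clock = Stopwatch.StartNew();

    protected readonly int InstanceId = Interlocked.Increment(ref _nextId);

    protected void TraceCall(object args = null, [CallerMemberName] string member = null)
    {
        // Time offset, thread id, ClassName#instanceid, member and arguments.
        Console.WriteLine("{0,8}ms [{1,2}] {2}#{3} {4}({5})",
            Clock.ElapsedMilliseconds,
            Thread.CurrentThread.ManagedThreadId,
            GetType().Name, InstanceId, member, args);
    }

    protected T GetTraced<T>(ref T field, [CallerMemberName] string member = null)
    {
        TraceCall("get", member);
        return field;
    }

    protected void SetTraced<T>(ref T field, T value, [CallerMemberName] string member = null)
    {
        TraceCall(value, member);
        field = value;
    }
}

// Usage: properties forward to the base class, which captures their name.
class ViewModel : TracedObject
{
    string _name;
    public string Name
    {
        get { return GetTraced(ref _name); }
        set { SetTraced(ref _name, value); }
    }
}
```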
In the end it's lots of data to wade through, so some means of selecting interesting subsets for a particular debugging session would be useful. I guess the common approach is to use a combination of trace level and logger instance to achieve this; I didn't get around to it in this project. One coverage filter that could be interesting would be to only show coverage reachable from a particular method, just like you can show coverage for only a particular test. However, in an age of async, TPL, Rx and event handlers, the notion of "reachable from a method" might not be as straightforward as keeping track of the stack.
WRT performance, I don't think a stack trace for each call is really necessary, or even useful for that matter. Instead it might be better to keep track of your own execution graph and just add nodes to the graph as explicitly requested by the programmer. Not sure how that could be achieved though; CorrelationManager from System.Diagnostics?
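A minimal sketch of that idea using CorrelationManager's logical-operation stack (the ExecutionGraph helper and its output format are hypothetical; StartLogicalOperation, StopLogicalOperation and LogicalOperationStack are the actual System.Diagnostics members):

```csharp
using System;
using System.Diagnostics;

// Sketch: programmer-requested execution-graph nodes, scoped with using().
static class ExecutionGraph
{
    public static IDisposable Node(string name)
    {
        Trace.CorrelationManager.StartLogicalOperation(name);
        // Stack depth tells us where this node sits in the graph.
        int depth = Trace.CorrelationManager.LogicalOperationStack.Count;
        Console.WriteLine("{0}> {1}", new string(' ', depth * 2), name);
        return new StopOnDispose();
    }

    class StopOnDispose : IDisposable
    {
        public void Dispose()
        {
            Trace.CorrelationManager.StopLogicalOperation();
        }
    }
}
```

Usage would be something like `using (ExecutionGraph.Node("LoadCustomers")) { ... }`, with nested usings forming the graph. Whether the logical-operation stack flows correctly across async/TPL continuations would need verifying.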
WRT capturing traces, I resorted to ToString() overrides to get useful data. But I guess with something like semantic logging you'd rather extract raw data instead, which still has to be persistable though. Maybe the programmer could supply a Func<T,object> to the tracing definition as needed.
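A sketch of such a tracing definition, assuming a hypothetical SemanticTrace registry that falls back to the raw value when no extractor was supplied:

```csharp
using System;
using System.Collections.Generic;

// Sketch: per-type Func<T,object> extractors instead of ToString() overrides.
static class SemanticTrace
{
    static readonly Dictionary<Type, Func<object, object>> Extractors =
        new Dictionary<Type, Func<object, object>>();

    // The programmer supplies an extractor per traced type as needed.
    public static void Define<T>(Func<T, object> extract)
    {
        Extractors[typeof(T)] = x => extract((T)x);
    }

    // Applied at trace time to get persistable raw data.
    public static object Capture(object value)
    {
        Func<object, object> extract;
        if (value != null && Extractors.TryGetValue(value.GetType(), out extract))
            return extract(value);
        return value; // fall back to the value itself (or its ToString())
    }
}
```

For example, `SemanticTrace.Define<Customer>(c => new { c.Id, c.Name })` would keep the trace free of irrelevant fields while still capturing structured data.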
Another idea that just popped into my head is to attach test case data to tested units. Let's say you write a method int CalculateSomething(int arg1, int arg2); besides displaying that the method is covered by a test, how about folding out the actual test data in the editor? Some options:
* A table of arguments, expected results and the actual results
* A REPL worksheet such as the one from Scala, or something along the lines of LINQPad
https://github.com/scala...et/wiki/Getting-Started
* Visualization such as
http://conal.net/papers/Eros/
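The table option could be fed by recording each invocation during the test run; a sketch of that (TestCaseLog and CaseRow are invented names, and the editor fold-out itself would be tooling built on top of this data):

```csharp
using System;
using System.Collections.Generic;

// One row in the folded-out table: arguments, expected and actual result.
class CaseRow
{
    public object[] Arguments;
    public object Expected;
    public object Actual;
}

// Sketch: rows recorded per tested method during a test run.
static class TestCaseLog
{
    public static readonly Dictionary<string, List<CaseRow>> Rows =
        new Dictionary<string, List<CaseRow>>();

    public static void Record(string method, object[] args, object expected, object actual)
    {
        List<CaseRow> list;
        if (!Rows.TryGetValue(method, out list))
            Rows[method] = list = new List<CaseRow>();
        list.Add(new CaseRow { Arguments = args, Expected = expected, Actual = actual });
    }
}
```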