Always-On Programming Visualizations
John Nilsson
#1 Posted : Saturday, May 3, 2014 8:33:54 PM(UTC)

So this just crossed my radar: http://lambda-the-ultimate.org/node/4945

It has some interesting ideas that should be straightforward to implement in NCrunch.

I particularly liked the idea of attaching structured trace logs as line annotations.

It would be awesome to replace the trace output from my acceptance tests with something easier to filter and navigate.
Remco
#2 Posted : Saturday, May 3, 2014 10:55:15 PM(UTC)
Hi John,

This is, actually, one of the core ideas on which NCrunch was founded.

In the JS/Ruby arena there has been considerable experimentation in this area, with some really great ideas.

Because it's driven by automated tests, NCrunch can 'easily' capture this sort of information in a very tidy way. It's possible to capture values ranging from class members to method parameters to local variables...

The hard part is finding a tidy way to show it in the IDE. This is straightforward when looking at a barely tested piece of code, or when NCrunch has been set to show code coverage for a single test only. But as soon as multiple tests get involved, everything becomes very noisy. So at the moment, this is mostly a user experience issue.

I think the key element that makes something like this successful is the way it can passively inform you of important things your code is doing. NCrunch's coverage markers do this in a very clean and simple way, so that you don't need to go looking for the information to have it there. It's harder to do this with every data point for every test run :) I'd love to hear your thoughts on it.
John Nilsson
#3 Posted : Sunday, May 4, 2014 12:50:06 AM(UTC)

I'm already using NCrunch in the way they describe to see coverage when debugging a single test. The indications of where time is spent are also much appreciated. Adding a simple counter as in http://alltom.com/pages/...14/images/slide.010.jpg could be interesting, to see if it adds any value. When running multiple tests, I guess a tooltip or similar could break it down into individual tests.

Just now I was thinking about how to get this information recorded for a manual execution, but I guess that's simply a matter of launching "main" from an otherwise ignored test.
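Something like this minimal sketch is what I have in mind, assuming an NUnit test project and an accessible Program.Main entry point (both assumptions on my part):

using NUnit.Framework;

[TestFixture]
public class ManualRunHarness
{
    // Marked Explicit so it never runs as part of the normal suite; run it by
    // hand when you want a manual session recorded alongside the tests.
    [Test, Explicit("Manual execution harness, not a real test")]
    public void RunApplicationEntryPoint()
    {
        // Program.Main is the (assumed accessible) entry point of the application.
        Program.Main(new string[0]);
    }
}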


For the structured trace as in http://alltom.com/pages/...14/images/slide.013.jpg, I was thinking that instead of putting everything in one output window, as in their case, you would navigate by using the source lines as bookmarks into the call graph.

One version of this might be to add 'probes' which NCrunch would add to the code and record, so you could examine recorded values much as you would in the debugger when execution is halted, only with the option to navigate in time. Adding a probe would, as they say, be similar to adding a Debug.WriteLine or such, only it would just be a click, and would require a recompilation. But I guess I would like it to end up in version control, and possibly be usable outside of NCrunch.
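A rough sketch of what a probe call site could look like; the Probe class and its Record method are entirely made up here, just to illustrate the idea:

using System.Diagnostics;
using System.Runtime.CompilerServices;

// Hypothetical probe helper: records a named value together with its call site,
// so a tool could later show the recorded values per line and per test run.
public static class Probe
{
    public static T Record<T>(
        T value,
        string name = null,
        [CallerFilePath] string file = "",
        [CallerLineNumber] int line = 0)
    {
        // A real implementation would write to a structured trace sink;
        // this just forwards to Debug for illustration.
        Debug.WriteLine("{0}:{1} {2} = {3}", file, line, name, value);
        return value; // returning the value lets a probe wrap expressions inline
    }
}

// Usage: wrap any expression without otherwise changing the code.
//   var total = Probe.Record(CalculateSomething(a, b), "total");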

I guess I was a bit inspired by http://blogs.msdn.com/b/...g-semantic-logging.aspx and the possibility to use it as (- BROKEN LINK -)

I do think that a UI for navigating and visualising execution traces is probably out of scope for NCrunch, but could certainly be made as a separate product with tight integration.
Remco
#4 Posted : Sunday, May 4, 2014 1:38:30 AM(UTC)
Thanks for sharing this, it's helped to articulate a particularly interesting fact: the recorded activities of software, when analysed post-execution, can be tremendously more insightful than the data a debugger can capture, since a debugger can only show state at a particular point in time.

So in my mind, the opportunities here are centered very much around how this data can be meaningfully reported. I believe that there are two kinds of reporting that can be done here:

Passive reporting - In the same manner as NCrunch's inline code coverage and performance 'hot spots', some elements of data can clearly be shown to the user without the user needing to look for them. This is very powerful because it allows the user to become aware of things their code is doing that may be entirely unexpected. It also reduces the burden of needing to click through extensive UIs to try and figure out what's happening. The big downside of passive reporting is that it's very hard to aggregate complex behaviour into small, unobtrusive elements. There will also be limits on how much data can be passively reported without creating noise or confusion.

Active reporting - I guess an example of this would be NCrunch's reporting of exception stack traces by clicking on a red X marker. There's too much data to show here passively, so we basically provide a drill-down option. NCrunch doesn't report much data actively, and as you've highlighted, there is HUGE room for more development in this area.

My preference is to try and find a way to passively report data before resorting to active reporting. I believe this creates a cleaner 'flow' when working with code. Of course, this isn't always possible.

There are always constraints around the volume of data that can be collected during any test run... along with the impact on test performance that comes with recording lots of stuff. I think there are plenty of ways to mitigate these problems though. Since the V2 release, the NCrunch engine has ample capacity for additional load. Distributed processing may also help us here (or hinder us, depending on network data transfer limitations).

So to me, the biggest questions are:

1. What data do we want to capture?
2. What is the most convenient way to report it?

Ideas for data that can be captured:
- Code coverage (already done, although this could be extended to capture the coverage density for individual lines)
- Performance information (already done)
- Test trace output (already done)
- Memory consumption/traffic
- Stack traces (i.e. recorded at the top of every method)
- Static and class members
- Local variables
- Method parameters
- Method return values

The Elm concepts you've linked to have some more interesting ideas for data that can be captured, although as NCrunch is not UI-centric and must treat tested code very generically, these are probably not achievable with the current design.

What are your thoughts?
John Nilsson
#5 Posted : Sunday, May 4, 2014 2:50:15 PM(UTC)

Besides execution counts, I don't have any direct ideas for passive presentation.
Possibly some kind of code map with a heat signature; not sure how useful that would be though.
(In my dreams, the solution explorer, or files for that matter, has been replaced with a 3D rendering of the application, 2D for module layouts and 1D for layer separations, such that dataflow can be visualized as connections between layers, making an obvious place to hook in various visualization probes for debugging.)


In a recent WPF application we developed, I depended mostly on logging for post-execution analysis, as you say, which is a nice way to discover oddities such as code being executed more often than necessary. Here are some of the strategies I've adopted to make this effective:

* Add tracing to both the Get and Set implementations of properties (using a common base class capturing [CallerMemberName]); there's a sketch of this after the list
* Add tracing to events and Rx.OnNext (this could probably be done as a custom scheduler)
* Add visible headers (lots of whitespace) to the trace for significant events, mostly top level calls (SpecFlow step -> PageObject method called -> ICommand executed)
* Add a simple instance id (static int++ on new) to use as a prefix for tracing (ClassName + instance id + method called + arguments)
- This is also used to trace lifecycle events such as new and dispose. I had the finalizer report undisposed instances to track down leaks. I had to force a bunch of GC calls and such at the end of each test so those weren't reported in the wrong test, though.
* Prefix all traces with threadid
* Prefix all traces with time (offset from start is enough, but I implemented it in such a way that it usually measures the offset from NCrunch process start, which is less useful)
- I find this is mostly used to find performance issues, so it would be much better to have this presented as execution time in the usual profiler tree
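As a rough sketch of what that tracing base class looks like (names and the exact prefix format are simplified here, with Debug as the sink just for illustration):

using System.Diagnostics;
using System.Runtime.CompilerServices;
using System.Threading;

// Illustrative base class combining several of the strategies above:
// per-instance id, thread id and elapsed-time prefixes, plus
// [CallerMemberName]-based tracing of property getters and setters.
public abstract class TraceableBase
{
    private static int _nextId;
    private static readonly Stopwatch Clock = Stopwatch.StartNew();
    private readonly int _instanceId = Interlocked.Increment(ref _nextId);

    protected T TraceGet<T>(T value, [CallerMemberName] string property = "")
    {
        Write("get " + property + " -> " + value);
        return value;
    }

    protected T TraceSet<T>(T value, [CallerMemberName] string property = "")
    {
        Write("set " + property + " = " + value);
        return value;
    }

    protected void Write(string message)
    {
        Debug.WriteLine("[{0:F1}ms][T{1}] {2}#{3} {4}",
            Clock.Elapsed.TotalMilliseconds,
            Thread.CurrentThread.ManagedThreadId,
            GetType().Name, _instanceId, message);
    }
}

// Usage in a view model:
//   private string _name;
//   public string Name
//   {
//       get { return TraceGet(_name); }
//       set { _name = TraceSet(value); }
//   }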

In the end it's a lot of data to wade through, so some means of selecting interesting subsets for a particular debugging session would be useful. I guess the common approach is to use a combination of trace level and logger instance to achieve this; I didn't get around to it in this project. One coverage filter that could be interesting would be to only show coverage reachable from a particular method, just like you can show only coverage for a particular test. However, in an age of async, TPL, Rx and event handlers, the notion of "reachable from a method" might not be as straightforward as just keeping track of the stack.

WRT performance, I don't think a stack trace for each call is really necessary, or even useful for that matter. Instead it might be better to keep track of your own execution graph and just add nodes to it as explicitly requested by the programmer. I'm not sure how that could be achieved though; CorrelationManager from System.Diagnostics?
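For what it's worth, a rough sketch of how the logical operation stack on Trace.CorrelationManager could mark explicit nodes in such a graph (whether this survives async boundaries is another question):

using System;
using System.Diagnostics;

public static class ExecutionGraph
{
    // Pushes a named node onto the logical operation stack for the duration of
    // the returned scope; nested calls build up the path through the graph.
    public static IDisposable Node(string name)
    {
        Trace.CorrelationManager.StartLogicalOperation(name);
        return new PopOnDispose();
    }

    private sealed class PopOnDispose : IDisposable
    {
        public void Dispose()
        {
            Trace.CorrelationManager.StopLogicalOperation();
        }
    }
}

// Usage:
//   using (ExecutionGraph.Node("LoadCustomers"))
//   {
//       // Trace.CorrelationManager.LogicalOperationStack now has "LoadCustomers"
//       // on top of any outer nodes, which a trace listener can record.
//   }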

WRT capturing traces, I resorted to ToString() overrides to get useful data. But I guess with something like semantic logging you'd rather extract raw data instead, which still has to be persistable though. Maybe the programmer could supply a Func<T,object> to the tracing definition as needed.
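A small sketch of what supplying such a projection could look like; the TraceDefinition type (and the Order example) are purely hypothetical:

using System;
using System.Diagnostics;

// Hypothetical trace definition that carries a projection from the traced
// object to the raw, persistable data that should actually be recorded.
public sealed class TraceDefinition<T>
{
    private readonly Func<T, object> _project;

    public TraceDefinition(Func<T, object> project)
    {
        _project = project;
    }

    public void Write(T value)
    {
        // Instead of relying on ToString(), persist only the projected data.
        Debug.WriteLine(_project(value));
    }
}

// Usage: record just the fields that matter for an Order, not its ToString().
//   var orderTrace = new TraceDefinition<Order>(o => new { o.Id, o.Total });
//   orderTrace.Write(order);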




Another idea that just popped into my head is to attach test case data to tested units. Let's say you write a method int CalculateSomething(int arg1, int arg2); besides displaying that the method is covered by a test, how about folding out actual test data in the editor? Some options:
* A table of arguments, expected results and actual results (see the sketch after this list)
* A REPL worksheet such as the one from Scala https://github.com/scala...et/wiki/Getting-Started or along the lines of LINQPad
* A visualization such as http://conal.net/papers/Eros/
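For the table option, the data arguably already exists in parameterised tests; a small sketch using NUnit's TestCase attribute (the Calculator class here is made up):

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    // Each TestCase row is exactly the kind of arguments/expected/actual table
    // that could be folded out next to CalculateSomething in the editor, with
    // the actual result filled in from the latest run.
    [TestCase(1, 2, 3)]
    [TestCase(5, -5, 0)]
    [TestCase(10, 7, 17)]
    public void CalculateSomething_returns_the_expected_result(int arg1, int arg2, int expected)
    {
        Assert.AreEqual(expected, new Calculator().CalculateSomething(arg1, arg2));
    }
}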

Remco
#6 Posted : Sunday, May 4, 2014 10:47:27 PM(UTC)
There are some great ideas here.

My dreams of an evolved IDE share much in common with yours. I was also thinking about making use of Oculus at some stage, or something similar. A VR view of a codebase would give much more room for visualising code behaviour. I think this is still a long shot, but it's an interesting concept.

I appreciate you sharing your ideas in this area. I think there's a big menu of new features to choose from. Let's see where this leads :)