Running only impacted tests
alexidsa
#1 Posted : Sunday, June 24, 2012 8:51:45 PM(UTC)
I have read Greg Young's feedback on NCrunch's approach of running all (not only impacted) tests: http://codebetter.com/gr...and-continuous-testing.

In my experience Greg is correct. While NCrunch's parallelization is implemented very well, at some point in the project I am working on NCrunch started to take too long to execute all tests (about a minute). It was quicker for me to run the current test class manually than to wait for NCrunch to finish running all the tests (yes, I could miss some tests, especially integration tests, with this approach). MightyMoose solved this problem, and as a result our team switched to it.

What do you think about it? Do you have plans to implement this feature in NCrunch?
Remco
#2 Posted : Monday, June 25, 2012 1:06:29 AM(UTC)
NCrunch does have this feature, although its approach is somewhat different from that of Greg's tool.

By default, NCrunch uses test impact detection to establish the 'priority' of a test rather than whether or not it should be run. Because NCrunch is a truly continuous test runner, a test run doesn't need to fully complete before you can make a meaningful decision about whether your code is working correctly.
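
As a rough illustration (a conceptual sketch only, not NCrunch's actual implementation), a runner working this way keeps every test in the queue but orders it so that impacted tests report first:

```python
# Conceptual sketch: a continuous runner that orders its queue so tests
# impacted by the latest change run first, but still eventually runs everything.
from dataclasses import dataclass

@dataclass
class Test:
    name: str
    impacted: bool        # touched by the latest code change?
    last_duration: float  # seconds, recorded from the previous run

def build_queue(tests):
    # Impacted tests first; within each group, shortest tests first so
    # meaningful feedback arrives as early as possible.
    return sorted(tests, key=lambda t: (not t.impacted, t.last_duration))

queue = build_queue([
    Test("SlowIntegrationTest", impacted=False, last_duration=40.0),
    Test("ChangedServiceTest", impacted=True, last_duration=2.5),
    Test("ChangedParserTest", impacted=True, last_duration=0.3),
])
print([t.name for t in queue])
# ['ChangedParserTest', 'ChangedServiceTest', 'SlowIntegrationTest']
```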

As an example, one project I once used NCrunch very heavily on had a total running time of around 2 hours for all tests in the solution. Running the tests manually with any other test runner was extremely painful, as it involved carefully selecting the tests you felt were relevant to the piece of code you'd been responsible for changing. If you got this wrong, your check-in would break the build. We could take the broader approach of waiting until the full 2 hours of tests had finished running, though this would then increase the risk of merge conflicts (as the actively developing team was quite large). NCrunch was extremely valuable on this solution because the impact detection would always prioritise the impacted tests first, so we could just keep making changes to the code and use the risk graph or the solidity of the coverage markers as an indication of when it was safe to check code in. In the rare event that a test wasn't run early enough to avoid checking in broken code, NCrunch would still bring up the failed test shortly after check-in (as part of its usual continuous running), so it was possible to take corrective action much earlier.

True continuous testing massively changes the way we approach test-driven development. Instead of treating the safety of working code as absolute on a full test run, we now have the option of treating safety as relative, based on the progress of a test run. The primary purpose of a good continuous testing tool is to give you the most relevant feedback as early as possible, so you minimise the time spent waiting for tests to run. Although the context of Greg's blog post doesn't imply this, Greg and I actually fully agree on the concept of 'Good enough', and I feel this is an important perspective to adopt in a world full of long test pipelines and short development schedules. For this reason I have long considered seriously revising the concept of the risk/progress graph (i.e. the fact that it has an 'end'), though many of NCrunch's users are now very used to it and would likely object to me changing it.

But anyway, regardless of my own opinion or perspective, NCrunch is itself a tool designed to solve a very clear set of problems. So yes, it also supports the ability to run only impacted tests. You'll find an engine mode available in the top menu ('Run impacted tests only') that will do just this - or you can create your own custom engine mode that makes use of the impact flag in deciding which tests to run.
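
For illustration only (again just a conceptual sketch using the same toy model as above, not the real engine mode configuration), an 'impacted only' mode filters on the impact flag instead of merely sorting by it:

```python
# Conceptual sketch only: an 'impacted only' mode runs just the tests whose
# impact flag is set, rather than ordering the full queue by it.
def impacted_only(tests):
    return [t for t in tests if t.impacted]
```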

There are also other ways you can give NCrunch clues about which tests you feel are most important to the code you're changing. Any test 'pinned' to the tests window will automatically receive a large priority boost. You can also run covering tests with a high priority by using an easily accessible keyboard shortcut.

I hope this helps!


Cheers,

Remco
alexidsa
#3 Posted : Monday, June 25, 2012 4:24:34 AM(UTC)
Hm... I don't understand why you treat "run all tests" as the default behaviour. I don't see why we need to run all those unimpacted tests; in 99.999% of cases it gives us nothing.

Thank you for pointing out the "run only impacted tests" engine mode; I hadn't seen it before (did this feature appear recently?). Shouldn't the engine mode choice be part of the Configuration Wizard? I ran an interesting experiment: I enabled both MightyMoose and NCrunch (both in "run only impacted tests" mode) and watched how many tests they ran for the same code changes. I would say the number of tests differs in many cases :)
Remco
#4 Posted : Monday, June 25, 2012 8:06:56 AM(UTC)
It can be difficult to set the defaults in a way that works well for everyone. Generally I try to err on the side of caution as people are often quick to pass judgement on tools by their default configuration, and there is still a chance that the impact detection could result in tests not being run when they should be run.

The engine mode choice does appear as part of the configuration wizard - including an option for 'Run impacted tests only'. It is still a fairly new feature (introduced in 1.38b back in March) and as such it has yet to reach full maturity.

The 'Run all tests' option makes more sense when you look at test running from a continuous perspective rather than a selective one. Even if there is only a 1% chance of impact detection failure, it can often still be worth running the other tests when system resources are abundantly available. While no amount of processing done on any computer is ever completely 'free', it can usually still be done in a way that does not cause loss of performance or user interruption. Of course, individual results may vary depending upon the specification of the computer, the project being worked on, and the configuration of NCrunch. This is why the engine mode setting exists with so many customisation options.

It's my understanding that Mighty Moose uses a form of static analysis to identify tests that may be impacted by a change. NCrunch works using dynamic analysis with a bias towards performance over accuracy (NCrunch's impact detection is very fast), so you may notice widely different results between the two tools.
alexidsa
#5 Posted : Tuesday, June 26, 2012 5:27:47 AM(UTC)
Thank you for the detailed answer. I get your point. Just one thing I wanted to clarify: what do you mean by dynamic analysis? I roughly understand how static analysis could work here: we see the modified code, walk the dependency graph and find all the tests which depend on it (a simplified representation, of course). But how does dynamic analysis work here?
Remco
#6 Posted : Wednesday, June 27, 2012 9:42:34 AM(UTC)
It may depend upon who you ask, but my definitions are as follows:

Static analysis: Analysis of the code through parsing or metadata loading in order to establish the behaviour of the code without actually executing it. Using this method of analysis to identify impacted tests can be very powerful, as it can give more predictable results when working with tests that have inconsistent behaviour (e.g. tests that rely on static constructors or make use of the Random class).

Dynamic analysis: Analysis of the code through execution and profiling. Using this method of analysis for identifying impacted tests means recording the behaviour of the test over its previous execution and using this captured data to identify areas of the codebase that may impact the test when changed.
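
As a rough illustration of the dynamic approach (a conceptual sketch only, with made-up test and member names, not NCrunch's real data structures):

```python
# Conceptual sketch: per-test coverage recorded during the previous run
# drives impact detection; the map would be rebuilt each time tests execute
# under the profiler.
coverage_map = {
    "ParserTests.ParsesEmptyInput": {"Parser.Parse", "Tokenizer.Next"},
    "ReportTests.RendersTotals": {"Report.Render", "Formatter.Money"},
}

def impacted_tests(changed_members, coverage_map):
    """Tests whose last recorded execution touched any changed member."""
    return [test for test, covered in coverage_map.items()
            if covered & changed_members]

# After editing Tokenizer.Next, only the parser test is flagged as impacted.
print(impacted_tests({"Tokenizer.Next"}, coverage_map))
# ['ParserTests.ParsesEmptyInput']
```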