Potential bug in metrics
ncrunchuser1
#1 Posted : Sunday, September 3, 2017 3:12:12 PM(UTC)
Hello,

I think I've run into a bug in the metrics. I'm testing a project that uses ConditionalAttribute. Unfortunately, NCrunch doesn't allow running multiple configurations of the same project, so I have to create two separate test projects, one with the #define and one without. To avoid duplicating code, the test steps (I'm using SpecFlow) are defined in one test project and added as linked files in the other test project. The coverage indicator dots correctly show that all the lines in the step files have been covered. However, the metrics window shows only 70% coverage for one project and 90% for the other (paths not taken in one project are taken in the other).
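
For illustration, the linked step file contains code along these lines (the symbol, namespace, and step names below are simplified placeholders, not my real ones; one test project defines FEATURE_ENABLED and the other doesn't):

using TechTalk.SpecFlow;

namespace AcceptanceTests.Steps // placeholder namespace
{
    [Binding]
    public class FeatureToggleSteps
    {
        private bool _featureEnabled;

        [When(@"the system is initialised")]
        public void WhenTheSystemIsInitialised()
        {
#if FEATURE_ENABLED
            // Compiled only into the test project that defines FEATURE_ENABLED,
            // so this line can only ever be covered by that project's tests.
            _featureEnabled = true;
#else
            // Compiled only into the other test project, where the symbol is absent.
            _featureEnabled = false;
#endif
        }

        [Then(@"the feature flag should be (true|false)")]
        public void ThenTheFeatureFlagShouldBe(bool expected)
        {
            if (_featureEnabled != expected)
                throw new System.InvalidOperationException("Feature flag mismatch");
        }
    }
}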

Is this the expected behavior?

Thanks.
Remco
#2 Posted : Sunday, September 3, 2017 11:54:55 PM(UTC)
Hi, thanks for posting.

Yes, this is by design. When you share source files between projects, those source files are duplicated internally by NCrunch so that a fully separate copy exists for each project. This is the only way such code can make sense, because code compiled in a different project can be physically completely different (for example, through the use of #if or ConditionalAttribute). The MS tool stack also does this under the hood. When the coverage is presented in the code window, the separate results from each project are internally merged into a single overlapping result set, because the VS code windows give a 'flat' view over all files shared between the projects.
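
As a rough sketch of why each project needs its own copy (the names here are made up for illustration):

using System;
using System.Diagnostics;

public static class AuditLog
{
    // The compiler removes every call to this method from any project that
    // does not define the AUDIT compilation symbol.
    [Conditional("AUDIT")]
    public static void Record(string message)
    {
        Console.WriteLine(message);
    }
}

public class Processor
{
    public int Process(int value)
    {
        // In a project compiled with AUDIT defined, this is a real call that
        // can be instrumented and covered. In a project without AUDIT, the
        // call is stripped at compile time, so this line is physically
        // different code in that project's copy of the file.
        AuditLog.Record("Processing " + value);
        return value * 2;
    }
}

Because the compiled output genuinely differs between the projects, there is no single 'true' coverage result for the shared file; there is one result per project.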

The metrics window is structured differently from the code windows, because it follows a hierarchical view that is broken down by project. It doesn't make logical sense for a project to show code coverage that belongs to a different project, so you get the 70% and 90% difference.

The idea of code coverage becomes more subjective when applied in a multi-platform or cross-compiled scenario, because even though code may be 'covered' when tested under one platform/configuration, it isn't always safe to assume it is also 'covered' under a different one. For example, if you were to test your code under only one platform but compiled it for two, there is a risk that platform-specific issues could get past your testing and show up in a production environment. These sorts of scenarios emphasise the limitations of using code coverage to determine overall test quality.
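
A rough example of that risk (the target framework symbols here are only for illustration):

public static class PathHelper
{
    public static string Normalise(string path)
    {
#if NETFRAMEWORK
        // Exercised only if tests actually run against the .NET Framework build.
        return path.Replace('/', '\\');
#else
        // If no tests run against the other target, this branch still compiles
        // and ships, but nothing ever verifies it.
        return path.Replace('\\', '/');
#endif
    }
}

A merged 'flat' view would show the method as covered, whereas the per-project breakdown in the metrics window makes the untested build visible, which is usually the more useful signal in this kind of scenario.
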
1 user thanked Remco for this useful post.
ncrunchuser1 on 9/4/2017(UTC)