Console Runner, OpenCover and NUnit reports w/ SonarQube
MarcChu
#1 Posted : Thursday, May 29, 2025 10:32:08 PM(UTC)
Rank: Member

Groups: Registered
Joined: 9/26/2014(UTC)
Posts: 24

Thanks: 3 times
Was thanked: 1 time(s) in 1 post(s)
I have a solution with many .Net Framework 4.8 projects. I've been working on CI/CD automation, using Jenkins pipelines. I have a large suite of NUnit tests that I run, and am attempting to collect test and code coverage results, which I then pass to SonarQube for analysis/tracking, as well as Jenkins itself, using assorted plugins for visualization and such.

I had originally gone down the path of using OpenCover with the NUnit console application, and I seemed to get good results with it. However, it took a LONG time for the entire suite to complete with the coverage instrumentation enabled: nearly 3 hours for ~3700 tests.

It was then that I realized that we might be able to leverage NCrunch. Indeed, that seems to be what the NCrunch Console Tool is intended to do. So I introduced this into my pipeline, following all of the NCrunch documentation. I get an OpenCover.xml report and a TestResultsInNUnitFormat.xml report that I can send off for SonarQube, as well as visualize with my Jenkins plugins.

With what I'm seeing in all of these, I have open questions as to the generation of both reports.

First, I think the NUnit report you're generating may be based on an old/outdated format. The reason I think this is that, ever since NUnit got rid of its GUI runner, I've been using the website located here for opening up reports for visualization. On that site, I can't open the TestResultsInNUnitFormat.xml report. I searched around the web and found an application for viewing NUnit reports here that can open your report, but which blows up when trying to open a newer one (created with the NUnit console runner). This viewer has a copyright date of 2012.

Interestingly, the Jenkins plugin looks like it can handle this report well, though there does seem to be a discrepancy between the counts (and also the "Package") for tests that use TestCaseAttribute, TestCaseSourceAttribute, and perhaps others.

However, when SonarQube tries to parse the report, it chokes on it.

With SonarQube debugging on, when it attempts to parse the report generated by the NUnit console runner, I get many messages of the following:
18:23:13 18:23:13.803 DEBUG: Added Test Method: WOTI.Xift.Arbiter.UnitTests.WOTI.Xift.Tests.Arbiter.ArbiterConfigurationSettingsTests.CanGetADServiceAccount to File: E:\Jenkins\workspace\Code_Quality_PR-35\Tests\Arbiter.UnitTests\ArbiterConfigurationSettingsTests.cs

However, when it attempts to parse the NCrunch TestResultsInNUnitFormat.xml, they all look like the following:
16:04:41 16:04:41.403 DEBUG: Test method null.WOTI.Xift.Tests.Arbiter.ArbiterConfigurationSettingsTests.CanGetADServiceAccount cannot be mapped to the test source file. The test will not be included.

Not a single test can be added.

Just from the above, it seems that SonarQube is looking for the assembly name to prepend to the namespace -> class -> method, and this may be the entire problem.
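If that diagnosis is right, a post-processing step in the pipeline might work around it until the export changes. Purely as a hedged sketch (the element and attribute names below are assumptions about the export structure, not verified against a real TestResultsInNUnitFormat.xml), something like this could prepend the assembly name to each test-case name before handing the report to SonarQube:

```python
# Hypothetical workaround: prepend the assembly name to each fully-qualified
# test name so SonarQube's mapper sees "Assembly.Namespace.Class.Method"
# instead of "null.Namespace.Class.Method".
# ASSUMPTION: the report contains <test-case name="..."> elements; a real
# NCrunch export may nest the name differently.
import xml.etree.ElementTree as ET

def prepend_assembly(report_xml: str, assembly: str) -> str:
    root = ET.fromstring(report_xml)
    for case in root.iter("test-case"):
        name = case.get("name", "")
        if name and not name.startswith(assembly + "."):
            case.set("name", assembly + "." + name)
    return ET.tostring(root, encoding="unicode")

# Tiny synthetic example (not a real NCrunch export):
sample = ('<test-results><test-case '
          'name="WOTI.Xift.Tests.Arbiter.ArbiterConfigurationSettingsTests.CanGetADServiceAccount"'
          ' /></test-results>')
fixed = prepend_assembly(sample, "WOTI.Xift.Arbiter.UnitTests")
```

If SonarQube actually takes the assembly from a parent test-suite node rather than the test-case name, the same approach would apply to that element instead.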



As to the problem with my OpenCover report: when SonarQube tries to parse it, I get many messages (6714) of the following type:
16:04:38 16:04:38.161 DEBUG: Coverage import: Line 69 is out of range in the file 'Libraries/Core/Services/Program/IFullCaseDetailsService.cs' (lines: 68)
For this particular file, I get the error for lines 69-84.

If I look for this file in the OpenCover.xml, I find:
<File uid="900" fullPath="E:\Jenkins\workspace\Code_Quality_PR-35\Libraries\Core\Services\Program\IFullCaseDetailsService.cs" />

If I search the file for "900", I find many things in the report like this:
<Class>
<Summary numSequencePoints="17" visitedSequencePoints="17" numBranchPoints="0" visitedBranchPoints="0" sequenceCoverage="100" branchCoverage="0" maxCyclomaticComplexity="0" minCyclomaticComplexity="0" maxCrapScore="0" minCrapScore="0" visitedClasses="1" numClasses="1" visitedMethods="1" numMethods="1" />
<FullName>Xift.Core.Security.PermissionService+&lt;ProcessFieldPermissions&gt;d__7</FullName>
<Methods>
<Method visited="true" cyclomaticComplexity="0" nPathComplexity="0" sequenceCoverage="0" branchCoverage="0" crapScore="0" isConstructor="false" isStatic="false" isGetter="false" isSetter="false">
<Summary numSequencePoints="17" visitedSequencePoints="17" numBranchPoints="0" visitedBranchPoints="0" sequenceCoverage="100" branchCoverage="0" maxCyclomaticComplexity="0" minCyclomaticComplexity="0" maxCrapScore="0" minCrapScore="0" visitedClasses="0" numClasses="0" visitedMethods="1" numMethods="1" />
<MetadataToken>100671976</MetadataToken>
<Name>System.Boolean Xift.Core.Security.PermissionService+&lt;ProcessFieldPermissions&gt;d__7::MoveNext()</Name>
<FileRef uid="123" />
<SequencePoints>
<SequencePoint vc="1" uspid="32932" ordinal="0" offset="50" sl="67" sc="137" el="67" ec="138" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32933" ordinal="1" offset="51" sl="68" sc="13" el="68" ec="67" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32934" ordinal="2" offset="78" sl="69" sc="13" el="69" ec="96" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32935" ordinal="3" offset="140" sl="70" sc="13" el="70" ec="20" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32936" ordinal="4" offset="141" sl="70" sc="38" el="70" ec="104" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32937" ordinal="5" offset="207" sl="70" sc="22" el="70" ec="34" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32938" ordinal="6" offset="224" sl="70" sc="106" el="70" ec="107" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32939" ordinal="7" offset="225" sl="71" sc="17" el="71" ec="24" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32940" ordinal="8" offset="226" sl="71" sc="52" el="71" ec="126" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32941" ordinal="9" offset="293" sl="71" sc="26" el="71" ec="48" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32942" ordinal="10" offset="310" sl="71" sc="128" el="71" ec="129" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32943" ordinal="11" offset="311" sl="72" sc="21" el="72" ec="53" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32944" ordinal="12" offset="342" sl="73" sc="17" el="73" ec="18" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32945" ordinal="13" offset="350" sl="71" sc="49" el="71" ec="51" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32946" ordinal="14" offset="377" sl="74" sc="13" el="74" ec="14" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32947" ordinal="15" offset="385" sl="70" sc="35" el="70" ec="37" bec="0" bev="0" fileid="900" />
<SequencePoint vc="1" uspid="32948" ordinal="16" offset="415" sl="75" sc="9" el="75" ec="10" bec="0" bev="0" fileid="900" />
</SequencePoints>
<BranchPoints />
<MethodPoint xsi:type="SequencePoint" vc="1" uspid="32932" ordinal="0" offset="50" sl="67" sc="137" el="67" ec="138" bec="0" bev="0" fileid="900" />
</Method>
</Methods>
</Class>
<Class>

I assume it's the fileid that's pointing back to that file, and the "sl" and "el" attributes that are pointing to line numbers that are out of range.

So, it seems to me that the OpenCover report is not being generated correctly, or is somehow getting corrupted.

I've also been able to use an AltCover.Visualizer tool to open this report. If I traverse the tree looking for the above class and methods, clicking on it shows me the source code for the file with fileid 900 (IFullCaseDetailsService).

I did read the support post here, which seems rather similar, though not exactly. But there are a lot of moving parts to this part of my issue, so I want to be deliberate with a solution. So I have not yet tried anything as specified in that post, e.g., set DebugType to portable.

One of the issues is that I do use a Grid Node Server to distribute my tests during my CI build, and I wonder if that might be something that's screwing this up. (ChatGPT seems to think so.)

Any thoughts on the above are appreciated.
Remco
#2 Posted : Friday, May 30, 2025 12:31:17 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,298

Thanks: 987 times
Was thanked: 1325 time(s) in 1228 post(s)
Thanks for sharing this issue in such detail.

You are correct in your assessment that the NUnit export format is very old. NCrunch's NUnit export was originally implemented well over 10 years ago (I think it was somewhere around 2012-2013) and was based off the structure of exports as they were back then. It hasn't seen any attention in a long time, mostly because people have largely moved to using OpenCover exports instead.

I'm glad that you found the thread discussing issues with the NCrunch OpenCover exports and Sonar. I think the contents of this thread do much to describe the current situation with this export and Sonar, and it's unfortunate that these problems continue.

The problems you're encountering here have two root causes, neither of which we really have an effective solution for.

The first problem is that there is a clear lack of standardisation in these two export formats. Both of them are exports from old tools that saw widespread adoption many years ago, and everyone using them has had to more or less guess at what the various fields in them meant (which in most cases were quite obvious, but in hindsight may have held some unexpected edges). My approach to developing these exports at the time was simply to build a code sample, export it using the tool, then try to build an export system in NCrunch that could produce more or less the same result. The unfortunate consequence of this problem is that when a reader cannot read an export, there's no clear indication as to whether the reader is at fault or the export file is at fault.

The second issue is that there is an underlying structural difference in the data that NCrunch is exporting, compared with most other coverage tools. Most coverage tools work at statement level. For example, the following code:

Console.Write("xyz");Console.Write("123");

.. Consists of two logical statements and is represented with two sequence points in the PDB file. A tool like PartCover would collect test coverage for both of these sequence points, effectively considering them to be two lines in the export file.

NCrunch doesn't work this way, as it collects coverage data line-by-line. Under NCrunch, because the statements occupy the same visible line of code, the coverage is collapsed down into a single synthetic statement. This means that the coverage is actually misaligned with the internal structure of the DLL/PDB files that come from the compiler, and it creates problems when trying to map the data into tools that are designed specifically around compiled assembly structure. You could say that because NCrunch is interested in source lines, and Sonar is interested in IL/sequence-points, they simply aren't speaking the same language, even if they're trying to use the same words.
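A toy sketch of that collapsing, purely illustrative and not NCrunch's actual implementation:

```python
# Illustrative only: several PDB sequence points that start on the same
# source line are merged into one synthetic line-level coverage point.
def collapse_to_lines(sequence_points):
    """sequence_points: list of (start_line, visit_count) tuples."""
    lines = {}
    for start_line, visits in sequence_points:
        # A line counts as covered if any statement on it was visited.
        lines[start_line] = max(lines.get(start_line, 0), visits)
    return sorted(lines.items())

# Two statements sharing line 10 (the Console.Write(...);Console.Write(...);
# case above) become a single covered line:
collapsed = collapse_to_lines([(10, 1), (10, 1), (11, 0)])
```

The information lost in that merge (which of the two statements on line 10 actually ran) is exactly what a sequence-point-oriented reader expects to find.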

The coverage export system in NCrunch fudges the data as best it can, so that every covered line of code is represented, but if the reader is particular about some of the details, it's quite likely to kick up problems.

Why is NCrunch different? This comes down to decisions made in its architecture that prioritise the way in which it's generally used. NCrunch's features are only really interested in lines of code, because representing coverage in a more granular way isn't practical to do passively (few people want to see colour charts all over their code all the time while they work). Collection of coverage at sequence point level instead of source line level greatly increases the complexity of the coverage mapping and merging routines, which in turn degrades performance. The coverage mapping and merging system is one of the most complex areas of NCrunch and it's extremely heavily optimised. NCrunch is able to concurrently merge and analyse test data from over a hundred background runners concurrently using no more than a single thread and without any significant difference in latency. I can't safely say that such a result would be impossible with more granular collection, but I can say that it's not something I feel safe attempting.

So we don't really have a practical way to solve this problem.

I'll note that it's unlikely that using the grid node will have any impact here, unless for some reason your node is producing a different assembly structure to the one on your local machine (NCrunch will generally warn you if it sees this). It might be worth making sure the toolsets you're using between client and node are the same, just to be safe. Changing the DebugType isn't supposed to make much difference, but it can have an effect on the way the compiler handles sequence points, and Sonar might be particular about this.

Anyway, one advantage that we do have here is that the export files are human readable. If you're able to corner anything in your export files that you are certain is wrong and could be corrected on the side of NCrunch, I will do my best to fix it for you as much as the above constraints will allow me to. I have no relationship with Sonar and no experience in working with their product, but if it's raising errors about something that can reasonably be fixed and you can give me an example of what the file should look like, I'll do my best here.
MarcChu
#3 Posted : Friday, May 30, 2025 5:43:11 PM(UTC)
Yes, I see the complications. NUnit seems to have been overtaken over the years by XUnit, and OpenCover is an archived project altogether. Unfortunately, we're pretty wedded to the former. As to the latter, since we're still on .Net Framework 4.8, this is the only thing that I could get (so far) to play nicely with our solution (until I turned to NCrunch). So, we're in legacy territory all around, and it's understandable that there wouldn't be a terribly high priority on fixing this.

I would ask, though (somewhat rhetorically): aren't both projects open-source, such that the report formats should not be so opaque?

At any rate, after your response, I did stumble upon the fact that SonarQube actually supports a generic test data report format, with its schema described here.

So, in lieu of dealing with the legacy complications already mentioned, I'll ask: what's the viability of introducing a feature that could generate the reports in this SonarQube-supported format? These reports are much simpler, certainly.

I suppose that's why NCrunch provides the raw results. I did take a quick glance at them. It certainly seems doable for me to write a tool that converts these reports myself; I'm just not sure I want to broaden the scope of my work on this that much. I think that converting the raw data to the test execution format would probably not be too much of a lift. Same with the code coverage, though without the branch data.
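As a hedged sketch of that direction: the testExecutions/file/testCase shape below follows SonarQube's documented generic test execution format, while the input tuples stand in for whatever would actually get parsed out of NCrunch's raw results:

```python
# Sketch: emit SonarQube's generic test execution XML from already-parsed
# test results. Input tuples are placeholders for the real raw-results data.
import xml.etree.ElementTree as ET

def to_generic_test_executions(results):
    """results: list of (source_file, test_name, duration_ms) tuples."""
    root = ET.Element("testExecutions", version="1")
    by_file = {}
    for path, name, duration in results:
        by_file.setdefault(path, []).append((name, duration))
    for path, cases in by_file.items():
        file_el = ET.SubElement(root, "file", path=path)
        for name, duration in cases:
            ET.SubElement(file_el, "testCase", name=name, duration=str(duration))
    return ET.tostring(root, encoding="unicode")

xml_out = to_generic_test_executions(
    [("Tests/Arbiter.UnitTests/ArbiterConfigurationSettingsTests.cs",
      "CanGetADServiceAccount", 12)])
```

The coverage side would be analogous (a `coverage` root with `file`/`lineToCover` children), which lines up with the point below that line-level granularity is all SonarQube really consumes anyway.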

This is all to say that, as far as what SonarQube takes (and gives us back--mostly just metrics), full OpenCover and NUnit results are probably overkill anyways, as the coverage granularity they analyze seems rather congruent with that of NCrunch. They're perhaps only parsing a small subset of data from these reports. So I can see a way forward to satisfying that aspect of my pipeline without extreme effort. I'm simultaneously engaging in some support with SonarQube to determine how much I might need to lean on actual NUnit and OpenCover reports, if at all. We can return to this aspect later, depending on what they tell me.

However, despite their projects' legacy status, NUnit and OpenCover reports are still widely supported (de-facto standard, even), and other tooling in my pipeline will attempt to parse them accordingly.

If necessary, in just comparing a newer NUnit v3 report with the one that NCrunch produces, it again seems doable that I could produce the proper output myself. This is just a matter of how much time and effort I have to expend on this.

The OpenCover report is more concerning, as the output just seems wrong. In the example I gave above, every sequence point for every method in the above class is pointing to fileid="900", which is incorrect (uid="382" for that file). I haven't looked at any of the other files that triggered this error, but I would imagine I'd see the same sort of thing for them. For this particular error, at least, I don't think the problem has anything to do with SonarQube (their error message correctly diagnoses the problem).

So there's something buggy happening here, and I'd like to figure out what and why, but I really have no idea where to start. I did look at the RawCoverageResults.xml to see what data was there, and it's pretty sparse. So I'm guessing that inspecting .pdb files is how NCrunch generates the other ~95% of the OpenCover.xml. Anything having to do with the structure and content of .pdb files is beyond me. This is why the portable DebugType seemed like a potential solution. But I wanted to check here first, because I haven't found great information on what that setting actually does, or whether it's even valid for .Net Framework (as opposed to Core).
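For reference, if that suggestion does get tried later, the switch is a per-project MSBuild property. A minimal sketch (hedged: my understanding is that the Roslyn compiler can emit portable PDBs for .NET Framework targets, but whether the PDB-reading tools elsewhere in the pipeline understand them is something to verify):

```xml
<!-- In the affected project's .csproj. Portable PDBs change how the
     compiler records sequence points; older PDB readers may not support
     the format, so test downstream tooling before rolling this out. -->
<PropertyGroup>
  <DebugType>portable</DebugType>
</PropertyGroup>
```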

I know that, in the past (before a laptop upgrade), I've seen warnings in my IDE about assemblies between grid nodes having different IL (or something). Interestingly, in trying to look for such warnings now, I see that every project is showing "NCrunch: This project was built on server '<Server>'" for only a single server. In the past, I definitely recall it displaying all my servers. So, is there any place on the build server that I'd see warnings like this?

Any other suggestions?
Remco
#4 Posted : Friday, May 30, 2025 11:52:18 PM(UTC)
MarcChu;18164 wrote:

At any rate, after your response, I did stumble upon the fact that SonarQube actually supports generic test data report format, with schema described here.

So, in lieu of dealing with the legacy complications already mentioned, I'll ask: what's the viability of introducing a feature that could generate the reports in this SonarQube-supported format? These reports are much simpler, certainly.

I suppose that's why NCrunch provides the raw results. I did take a quick glance at them. It certainly seems do-able that I could write a tool to convert these reports myself. I'm just not sure I want to broaden the scope of my work on this that much. I think that converting the raw data to the test execution format would probably not be too much of a lift. Same with the code coverage, though without the branch data.


Likewise, I also have constraints in how much time can be spent on certain things. Something you'll have noticed is that the legacy situation with these export files hasn't received a lot of attention in these forums. I don't think many people are actually using them. The SonarQube format looks fairly simple in principle, but introducing support for it would also require setting up a relationship with them so that I could test the import into their system to make sure it works correctly. It would also then imply monitoring the format to adapt to any changes, and providing support for when it fails. Really, we're not talking about just dumping out data - it would be systems integration. To justify another integration point, I would need to see more demand for it.

As you've identified, the NCrunch raw results are in a format that is specific to NCrunch and no other tool. In earlier versions of NCrunch, this was the only export option available. The others were added only because people asked for them to be there. The original idea was that NCrunch could export the data it had, and others could then use this as they saw fit (by transforming it into other formats as necessary).

MarcChu;18164 wrote:

The OpenCover report is more concerning, as the output just seems wrong. In the example I gave above, every sequence point for every method in the above class is pointing to fileid="900", which is incorrect (uid="382" for that file). I haven't looked at any of the other files that triggered this error, but I would imagine I'd see the same sort of thing for them. For this particular error, at least, I don't think the problem has anything to do with SonarQube (their error message correctly diagnoses the problem).

So there's something buggy happening here, and I'd like to figure out what and why, but I really have no idea where to start. I did look at the RawCoverageResults.xml to see what data was there, and it's pretty sparse. So I'm guessing that's where NCrunch goes about inspecting .pdb files--to generate that 95% of the OpenCover.xml. Anything having to do with the structure and content of .pdb files is beyond me. This is why the portable DebugType seemed like a potential solution. But I wanted to check here first, because I haven't found great information on what that actually does, or whether it's even valid for .Net Framework (as opposed to Core).


I've just done some further testing with the OpenCover exports in NCrunch, and I haven't been able to reproduce this problem. There must be something specific to your scenario that's triggering it. Are you able to produce a sample solution that can surface the problem? If so, please submit it in ZIP form through the support contact form and I'll take a closer look.

MarcChu;18164 wrote:

I know that, in the past (before a laptop upgrade), I've seen warnings in my IDE about assemblies between grid nodes having different IL (or something). Interestingly, in trying to look for such warnings now, I see that every project is showing "NCrunch: This project was built on server '<Server>' for only a single server. In the past, I definitely recall it displaying all my servers. So, is there any place on the build server that I'd see warnings like this?


The data in the Tests Window is shown according to the first node that was able to build the project. To see the other build results, you can use the Processing Queue Window, which reports on all build activity from across the grid. To get more detailed build results in this window, you'll need to adjust your logging settings.

If your toolsets are updated asymmetrically between your nodes/client, you'll sometimes see the IL signature warning. Generally, you only need to be concerned if it is persistent and doesn't go away. Such a situation usually means that the toolsets are inconsistent and need to be aligned/fixed manually.