In the Tests window, the Server column on grouping rows doesn't correctly reflect child tests.
Grendil
#1 Posted : Thursday, June 29, 2017 5:57:46 PM(UTC)
This is maybe both a feature request and a minor bug report. For the feature request aspect, I've created a UserVoice ticket, but for the bug aspect, I thought I would raise it here.

I use the Tests window with the Project grouping, and I've got a couple of node servers I'm playing with. In my case, the Server column in the Tests window always says "(Local)" on a grouping row if any child test ran locally, even though some children ran on remote nodes. I think Server should be a comma-delimited list, like Category; the next best thing would be leaving it blank on grouping rows.

If I'm filtering for failed tests only, the current behavior is really a bug: all of the failing tests may have run on one node server that isn't (local), yet the grouping row still says (local). That makes scanning the Server column to troubleshoot issues (in my current case, a node server dependency configuration issue) confusing. For a while I thought, "I see both local and remote nodes in the failing tests, so this must be a broad issue," when really it was specific to one node server.

Again, it's just a minor thing.
Remco
#2 Posted : Thursday, June 29, 2017 10:47:24 PM(UTC)
Hi, thanks for sharing this.

The value in this column actually carries some meaning, though in most scenarios this won't be obvious at all.

When distributing builds, NCrunch will run a build on every machine in the grid. Most of the results that are reported in the Tests Window for each project (i.e. the build time, when it was run, and the server) are obtained from the first build result returned by the grid for the project. The first result to return from the grid is considered 'primary' and has a special status in the engine, because it is used to obtain critical things like the list of tests in the assembly, IL hashes used for impact detection, etc. The rest of the results are mostly discarded with the exception of their build errors, as these can indicate an alignment issue between the nodes.
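
In rough pseudo-code, the behaviour looks something like this (a simplified Python sketch with made-up names, not the actual implementation):

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class BuildResult:
    server: str
    build_time_ms: int
    errors: List[str] = field(default_factory=list)

def integrate_results(results_in_arrival_order: List[BuildResult]) -> Tuple[Optional[BuildResult], List[str]]:
    """The first result back becomes 'primary' and supplies the values the
    Tests Window shows (server, build time, etc.); later results are
    discarded apart from their build errors, which can reveal an
    alignment issue between the nodes."""
    primary = None
    all_errors: List[str] = []
    for result in results_in_arrival_order:
        if primary is None:
            primary = result
        all_errors.extend(result.errors)
    return primary, all_errors
```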

If you're running a reasonably fast machine compared with your grid nodes, most of the time the local node will respond first, which is why you always see (local). This probably doesn't translate well to the tree structure, but the tree itself is a massive oversimplification of the results coming out of the engine anyway, so there will always be some rough edges on it.

I'm not really sure there is a better way to handle the server reporting for build steps. Making it a delimited list could cause other problems for people using large grids; you can imagine what the column would look like if you had 30 machines in your grid. Leaving it blank or filling it with a placeholder would deprive you of relevant information, as there would be no way to establish which server the other column values came from. The 'Last Execution Time' value is still of some importance, and it's only really meaningful when you know which server executed the build. It really doesn't seem like we have a perfect answer here.
Grendil
#3 Posted : Friday, June 30, 2017 12:17:05 AM(UTC)
Not sure if my last, long reply went through; I think the forum software ate it, probably for the best. :)

The short version is that I think the UI tree views would be more intuitive if columns on grouping rows always showed direct aggregations of the same column in the underlying child rows. I also think Server is a general name, so I would expect it to cover the most general point of interest: where the individual pass/fail result I'm seeing was served from. So I'd maybe create a new column like "Execution Times Shown For This Fastest Server" to make it obvious what we're seeing, and keep the Server column about where the pass/fail came from. On grouping rows, I'd comma-concatenate up to 50 or 100 characters of server names, and past that I'd just say "(Many)" instead. This would keep the column useful in a lot of cases where you're trying to understand pass/fail differences in a distributed scenario, especially when you're looking at different pass/fail outcomes on different nodes.
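
Something like this, just to make the truncation rule concrete (a hypothetical sketch, obviously not how NCrunch does it):

```python
def aggregate_servers(child_servers, max_chars=50):
    """Grouping-row aggregation as I'm imagining it: distinct child
    server names, comma-joined, collapsing to '(Many)' once the
    joined string exceeds the length budget."""
    names = sorted(set(child_servers))
    joined = ", ".join(names)
    return joined if len(joined) <= max_chars else "(Many)"

# e.g. aggregate_servers(["(Local)", "node-1", "(Local)", "node-2"])
#   -> "(Local), node-1, node-2"
```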

These may be short-sighted ideas, but sometimes it's hard to see a complex UI from the perspective of a general user when you're intimately familiar with its underpinnings.
Remco
#4 Posted : Friday, June 30, 2017 12:38:32 AM(UTC)
Grendil;10726 wrote:
Not sure if my last, long reply went through; I think the forum software ate it, probably for the best. :)


Ouch, sorry :(

Grendil;10726 wrote:
The short version is that I think the UI tree views would be more intuitive if columns on grouping rows always showed direct aggregations of the same column in the underlying child rows. [...]

The key problem from an implementation (and actually UX) standpoint here is that the project row in the tree represents two different things:
1. The results specific to the project itself (i.e. its build and analysis steps)
2. Aggregated results from the tests and fixtures under this project

This problem also exists for the fixture rows: they represent both aggregated results from their child tests and the results of the parent fixture itself, which has its own trace output and its own pass/fail result.

To take the standpoint that each row in the tree should be a clear aggregation of its child contents would be to lose critical information relevant to the row itself, which is more than just a grouping of child elements. For example, if we treated a project row as merely a container of fixtures and tests, there would be nowhere for us to report build results.
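
To make the duality concrete, here's a rough sketch (hypothetical Python types, nothing like our actual code) of what a project row carries:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestRow:
    name: str
    status: str   # "Passed" or "Failed"
    server: str   # where this test's pass/fail came from

@dataclass
class ProjectRow:
    name: str
    server: str                # from the project's own primary build result
    build_errors: List[str] = field(default_factory=list)   # the row's own data
    children: List[TestRow] = field(default_factory=list)

    @property
    def status(self) -> str:
        # The icon/status is the one aggregated value; every other column
        # on the row describes the project's own build/analysis result.
        failed = bool(self.build_errors) or any(c.status == "Failed" for c in self.children)
        return "Failed" if failed else "Passed"
```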

Of course it would be possible to build a separate UI structure that could show this data and leave the tree to aggregations, but then we'd have a whole new UI to manage and track. The nice thing about the Tests Window is that it gives you a full view of everything that would normally be relevant to someone in a standard NCrunch use case.

In my opinion, changing the Server column to an aggregation would put it out of step with the rest of the data shown on the project rows, and this would make the view much more confusing. I accept that this might work better for your use case, but in the current design, all data shown for the project (with the exception of the icon/status) is derived directly from the primary build and analysis result, with no aggregation performed. As soon as we start introducing selective behaviour for individual columns, all of that consistency is gone.

For your use case, it may be better to group the Tests Window by Test instead, as the projects themselves probably aren't something you're interested in. That way, you can sort the individual test results by Server, and it should give a very clear picture of pass/fail distribution by grid node. An alternative is to export the results to CSV, where you can easily sift through them in a spreadsheet.
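
For example, assuming the exported CSV has "Server" and "Status" columns (these names are a guess; check the header row of your actual export), a few lines of Python will tally failures per node:

```python
import csv
from collections import Counter

with open("ncrunch-results.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# "Server"/"Status" are assumed column names; adjust to match the export.
failures_by_server = Counter(
    r["Server"] for r in rows if r["Status"] == "Failed")

for server, count in failures_by_server.most_common():
    print(f"{server}: {count} failing test(s)")
```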
Grendil
#5 Posted : Friday, June 30, 2017 9:43:21 PM(UTC)
Thanks for thoughtfully considering my concerns. I'll try the workarounds you suggested.

I think my personal preference (and maybe this is greenfield thinking) would still be to separate the build statistics from the test results information, and not try to make it all fit in the same view. I personally find myself studying those things in different workflows, and find the current combination less convenient. For example, I'd prefer not to have all the successful build steps cluttering my Tests window when I choose to include green tests in my project-based view. (Use case: I sometimes include the green tests to create a fixed-layout view of my test suite, regardless of pass/fail status, but I don't also want to see that our 100+ projects are building in that same context.)

Please take all this feedback in the positive spirit in which it's intended. I still really love NCrunch!