Grendil;10726 wrote: Not sure if my last, long reply went through, I think the forum software ate it, probably for the best. :)
Ouch, sorry :(
Grendil;10726 wrote:
The short version is that I think the UI tree views would be more intuitive if columns on grouping rows always showed direct aggregation functions over the same column in the underlying children rows. I also think Server is a general name so I would expect it to be about the most general point of interest, like where the individual pass/fail result I'm seeing was served from. So I'd maybe create a new column like "Execution Times Shown For This Fastest Server" to make it obvious that's what we're seeing. In Server column, I'd keep it about where the pass/fail came from. On grouping rows, I'd comma concatenate up to 50 or 100 chars of server names, and if I exceed that I'd just say "(Many)" instead. This would keep that column useful for a lot of cases when you're trying to understand pass/fail differences in a distributed scenario, especially if you're looking at different pass/fail outcomes on different nodes.
These may be short-sighted ideas. But sometimes maybe it's hard to see a complex UI from the perspective of a general user when you're intimately familiar with its underpinnings.
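For anyone following along, the suggested grouping-row behaviour boils down to something like the sketch below. The function and node names are made up for illustration; only the comma-concatenation, the rough character cap and the "(Many)" fallback come from the suggestion itself:

```python
def aggregate_server_names(child_server_names, max_chars=100):
    """Rough sketch of the proposed grouping-row aggregation: comma-concatenate
    the distinct server names of the child rows, falling back to "(Many)" once
    the concatenated string would exceed the character cap."""
    joined = ", ".join(sorted(set(child_server_names)))
    return joined if len(joined) <= max_chars else "(Many)"

# e.g. a grouping row over results served from three grid nodes (made-up names):
print(aggregate_server_names(["node-a", "node-b", "node-a", "node-c"]))
# -> "node-a, node-b, node-c"
```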
The key problem from an implementation (and actually UX) standpoint here is that the project row in the tree represents two different things:
1. The results specific to a project itself (i.e. its build and analysis steps)
2. Aggregated results from the tests and fixtures under this project
This problem also exists for the fixture rows. They represent both the aggregated results of their child tests and the results of the fixture itself, which has its own trace output and its own pass/fail result.
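To make that duality concrete, here is a rough, hypothetical model of a tree row (none of these types exist in NCrunch; they are purely illustrative). Each row carries a result of its own in addition to whatever can be aggregated from its children, so a cell that only aggregates child rows has no place to surface the row's own build/analysis or fixture-level result:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class OwnResult:
    """The row's own outcome, e.g. a project's build/analysis result or a
    fixture's own pass/fail and trace output (illustrative fields only)."""
    status: str            # e.g. "Passed" / "Failed"
    trace_output: str = ""
    server: str = ""       # where this particular result was produced

@dataclass
class TreeRow:
    name: str
    own_result: Optional[OwnResult] = None        # result of the row itself
    children: List["TreeRow"] = field(default_factory=list)

    def aggregated_status(self) -> str:
        """What a 'pure grouping' row would show: an aggregate over child rows
        only. Note it never consults own_result on non-leaf rows, which is
        exactly the information that would be lost."""
        if not self.children:
            return self.own_result.status if self.own_result else "Unknown"
        child_statuses = [c.aggregated_status() for c in self.children]
        return "Failed" if "Failed" in child_statuses else "Passed"
```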
Taking the standpoint that each row in the tree should be a clean aggregation of its child contents would mean losing critical information relevant to the row itself, which is more than just a grouping of child elements. For example, if we treated a project row as merely a container of fixtures and tests, there would be nowhere for us to report build results.
Of course it would be possible to build a separate UI structure that could show this data and leave the tree to aggregations, but then we'd have a whole new UI to manage and track. The nice thing about the Tests Window is that it gives you a full view of everything that would normally be relevant to someone in a standard NCrunch use case.
In my opinion, changing the Server column on project rows into an aggregation would put it out of step with the rest of the data shown on those rows, and this would make the view much more confusing. I accept that this might work better for your use case, but in the current design, all data shown for the project (with the exception of the icon/status) is derived directly from the primary build and analysis result, with no aggregation performed. As soon as we start introducing selective behaviour for individual columns, that consistency is gone.
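Continuing the hypothetical sketch above, the difference between the current design and a selectively aggregated column might look roughly like this; the point is that the second function needs per-column special cases, which is where the consistency goes:

```python
def direct_cell(row, column):
    """Current design (sketched): every project-row cell is read straight from
    the row's own build/analysis result -- the same rule for every column."""
    return getattr(row.own_result, column, None)

def selectively_aggregated_cell(row, column, aggregated_columns=("server",)):
    """Suggested change (sketched): some columns aggregate over child rows,
    others keep showing the row's own data. The per-column special-casing is
    what breaks the uniform rule above."""
    if column in aggregated_columns:
        return sorted({getattr(c.own_result, column) for c in row.children
                       if c.own_result})
    return getattr(row.own_result, column, None)
```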
For your use case, it may be better to group the Tests Window by Test instead, as the projects themselves probably aren't something you're interested in. That way you can sort the individual test results by Server, which should give a very clear picture of the pass/fail distribution by grid node. Another option is to export the results to CSV, where you can easily sift through them in a spreadsheet.
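If you do go the CSV route, a few lines of scripting outside NCrunch can give the same per-node breakdown. The file name and the column names below ("Server", "Status") are guesses for illustration; check them against the headers in your actual export:

```python
import csv
from collections import Counter

# Count pass/fail outcomes per grid node from an exported results file.
# "ncrunch_results.csv", "Server" and "Status" are assumed names -- adjust
# them to match the real export.
distribution = Counter()
with open("ncrunch_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        distribution[(row["Server"], row["Status"])] += 1

for (server, status), count in sorted(distribution.items()):
    print(f"{server}: {count} x {status}")
```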