Hi, thanks for posting.
Yes, this is by design. When you share source files between projects, NCrunch internally duplicates those files so that each project has its own copy. This is the only way such code can be handled correctly, because the same code compiled in a different project can be physically completely different (for example, through the use of #if or ConditionalAttribute). The MS tool stack also does this under the hood. When the code coverage is presented in the code window, the separate results from each project are internally merged into a single overlapping result set, because the VS code windows give a 'flat' view over all files shared between the projects.
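As a contrived sketch (the Logger class and the VERBOSE_LOGGING symbol are invented for this example), here is how one shared file can compile into physically different code in each project:

// Logger.cs, shared by ProjectA and ProjectB. ProjectA defines
// VERBOSE_LOGGING in its build settings; ProjectB does not.
public static class Logger
{
    public static void Log(string message)
    {
#if VERBOSE_LOGGING
        // This line only exists in ProjectA's compiled copy of the file.
        System.Console.WriteLine($"[{System.DateTime.UtcNow:O}] {message}");
#else
        // ProjectB's copy compiles this line instead.
        System.Console.WriteLine(message);
#endif
    }
}

Because the two compiled copies are genuinely different code, coverage has to be tracked per project and only merged for display.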
The dimensions of the metrics window are different from those of the code windows, because the metrics follow a hierarchical structure that is broken down by project. It doesn't make logical sense for a project to show code coverage that belongs to a different project, so you get the 70% and 90% difference.
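To illustrate with invented numbers and names: suppose a shared file contains a method like the one below, and each project's tests happen to exercise different branches of it:

// Shared.cs, compiled into both ProjectA and ProjectB.
public static class Shared
{
    public static string Describe(int value)
    {
        if (value < 0) return "negative"; // exercised only by ProjectA's tests
        if (value == 0) return "zero";    // exercised only by ProjectB's tests
        return "positive";                // exercised by both test suites
    }
}

In the metrics window, each project would report only partial coverage of its own copy of the file, while the merged code-window view would mark all three branches as covered, because the two result sets overlap.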
The idea of code coverage starts to become more subjective when applied in a multi-platform or cross-compiled scenario, because even though code may be 'covered' when tested under one platform/configuration, it isn't always safe to assume it is also 'covered' under a different one. For example, if you were to test your code on only one platform but compile it for two, there is a risk that platform-specific issues could get past your testing and show up in a production environment. These sorts of scenarios emphasise the limitations of using code coverage to determine overall test quality.
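A minimal sketch of that risk, assuming a WINDOWS compilation symbol and a hypothetical ParseDfOutput helper (both invented for this example):

// DiskInfo.cs, compiled for two target platforms but only tested on Windows.
public static class DiskInfo
{
    public static long GetFreeSpace(string path)
    {
#if WINDOWS
        // Exercised by the test suite, so it reports as covered.
        return new System.IO.DriveInfo(path).AvailableFreeSpace;
#else
        // Never executed under test; a bug here surfaces only in production,
        // even though the shared file can still look 'covered' in a merged view.
        return ParseDfOutput(path); // hypothetical helper, for illustration only
#endif
    }
}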