Project build system doesn't seem to respect build order?
WadeHatler
#1 Posted : Friday, May 9, 2014 4:39:40 PM(UTC)
I just started trying to use NCrunch yesterday, and I'm running into a problem with builds. I have a moderate-sized solution with about a dozen projects, all of which use the same output folder. This is necessary because I'm doing a lot of mixed-mode managed/unmanaged multi-language stuff (both COM and P/Invoke calls between managed and unmanaged code, in both directions), and everything needs to be in the same folder to work correctly. I've been doing this for years, and I know it's generally not considered a great practice, but it has always worked fine.

One of the projects is a library, which is shared via a project reference in the solution. With both MSBuild and .NET Demon, this tells the build system about the build dependency order, and it builds correctly most if not all of the time.

I have disabled .NET Demon while testing NCrunch, so only NCrunch is responsible for the build. It pretty routinely fails when I make changes to the library with the message:
Quote:
..\..\..\Program Files (x86)\MSBuild\12.0\bin\Microsoft.Common.CurrentVersion.targets (3528): Could not copy "obj\Debug\Palabra.Library.dll" to "C:\Palabra\Bin\Palabra.Library.dll". Exceeded retry count of 10. Failed.
..\..\..\Program Files (x86)\MSBuild\12.0\bin\Microsoft.Common.CurrentVersion.targets (3528): Unable to copy file "obj\Debug\Palabra.Library.dll" to "C:\Palabra\Bin\Palabra.Library.dll". The process cannot access the file 'C:\Palabra\Bin\Palabra.Library.dll' because it is being used by another process.


If I use the Sysinternals Handle utility to see who has the file open, it turns out to be the test runner itself:
Quote:
nCrunch.TaskRunner40.x86.exe pid: 2356 42C: C:\Palabra\Bin\Palabra.Library.dll


If I close the solution and reopen it, then it seems to build most of the time.

Is there any kind of configuration switch or something like that I can use to resolve this?
Remco
#2 Posted : Friday, May 9, 2014 11:43:16 PM(UTC)
Rank: NCrunch Developer

Hi,

Judging from the file path involved ("C:\Palabra\Bin"), it looks to me as though this is being caused by the code that's being compiled/tested by NCrunch reaching outside the NCrunch workspace into your foreground solution. Am I correct in assuming that this is being done by setting the 'OutputPath' project property to an absolute path?

If so, such a setup is certain to give you problems with background build tools such as NCrunch. NCrunch works by copying your project files into a separate sandbox on your disk, then building them in isolation. If the project has been hard-coded with an absolute output path, it will try to dump its output files in the same place as the projects in the foreground solution (a bad thing).

The best way to solve this problem is to introduce an override into each of the projects involved, so that NCrunch will use a different OutputPath.

Within the project file, you'll see a declaration like this:

<OutputPath>c:\Palabra\Bin</OutputPath>

Replace this line with the following:

<OutputPath Condition="$(NCrunch) != '1'">c:\Palabra\Bin</OutputPath>
<OutputPath Condition="$(NCrunch) == '1'">Bin</OutputPath>

I hope this will do the trick!


Cheers,

Remco
WadeHatler
#3 Posted : Friday, May 16, 2014 3:48:01 PM(UTC)
Thanks, I finally got it to work, although in the end it took an enormous number of changes. My project is unusual, and I had to do several things that weren't covered in the documentation, but I finally got it working more or less, so now I can actually figure out whether I like the tool or not.

I have one other question. Most of the stuff that I'm doing is client/server. In these cases, most of my tests are not unit tests, but more like integration tests. I usually don't bother writing unit tests that run inside of the server, because I am more interested in testing end to end. Is there any way to force the coverage analysis to include modules that are not actually loaded in the process space of the test runner? I suspect the answer is no (how would the test runner even know what to analyze?), but I thought I would ask anyway.
Remco
#4 Posted : Friday, May 16, 2014 10:36:25 PM(UTC)
Hi, sorry that this took so much of your time. Large customised setups can attach a price tag to an NCrunch implementation. We try to make this easy... but it isn't always easy to foresee the state of the projects people are working on :(

NCrunch is quite well suited to integration testing, and in my belief the parallel processing features can allow it to add more value as an integration testing tool than as a unit testing tool.

It is an unfortunate limitation that at the moment, code coverage data can only be tracked within the test's primary application domain.

There are some plans to change this. In theory, it's possible to set an environment variable with a 'return address' that can allow further code under test to call back to the test process and inform it of code coverage. It's complicated, but I'd like to see it work. In your case, are the satellite processes launched by your code under test? And are their binaries derived from projects within your solution? If so, this may be a useful future feature for you.
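
To illustrate the general shape of that 'return address' idea (this is just a sketch of the pattern using plain named pipes, not how NCrunch would actually implement it; the environment variable name and the satellite executable are invented):

using System;
using System.Diagnostics;
using System.IO;
using System.IO.Pipes;

// Test-process side: publish a return address, launch the satellite, listen for reports.
class ReturnAddressHost
{
    static void Main()
    {
        string pipeName = "TestReturnAddress-" + Guid.NewGuid().ToString("N");

        var psi = new ProcessStartInfo("Satellite.exe") { UseShellExecute = false }; // invented satellite process
        psi.EnvironmentVariables["TEST_RETURN_ADDRESS"] = pipeName;

        using (var server = new NamedPipeServerStream(pipeName))
        {
            Process.Start(psi);
            server.WaitForConnection();
            using (var reader = new StreamReader(server))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                    Console.WriteLine("satellite reported: " + line);
            }
        }
    }
}

// Satellite side: if a return address is present in the environment, call home.
static class SatelliteReporter
{
    public static void Report(string message)
    {
        string pipeName = Environment.GetEnvironmentVariable("TEST_RETURN_ADDRESS");
        if (pipeName == null) return; // not running under the test host

        using (var client = new NamedPipeClientStream(".", pipeName))
        {
            client.Connect(1000);
            using (var writer = new StreamWriter(client))
                writer.WriteLine(message);
        }
    }
}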
WadeHatler
#5 Posted : Saturday, May 17, 2014 4:20:22 AM(UTC)
Fair warning: I'm in the speech recognition business, so I'm probably going to give you a longer answer than you really want – but I'll preface it by saying that things are working reasonably well now, so you don't necessarily need to do anything other than maybe throw out a suggestion or two if you have them.

My project structure and operation are both fairly unusual, although it's not a particularly big project. There are a total of 26 projects in the solution, most of which only have 5-10 source files. I keep it all in one big solution just because it makes it easier to do refactoring. I can easily split it in half, but I've never had any real reason to do so. I currently have about 20% coverage of the code, so there are not a ton of tests. About 3/4 of the tests I do have would be considered slow in one form or another. In fact, by most definitions most of my tests would be integration testing, not unit testing - mainly because my application is a giant integration engine.

The project is a little unusual because the .NET portions are part of a bigger project that automates another application. This means that I have a ton of interaction between managed and unmanaged code. I use COM, TCP, P/Invoke, DLL Injection, and a few other tricks to accomplish these tasks. The .NET portion is stuck in the middle of a bunch of other assets, written in several other languages. In the field, almost none of the .NET code is loaded directly by .NET. Most of it is loaded into one type of unmanaged application or another, mostly COM based.

The end result of all that, for the build issues we discussed, is that for various reasons I have to have most of my output in the same folder, and in many cases I have to have add-ins registered from that folder. I could probably get away with moving some of the .NET code off into another folder, but it's complicated enough that having some of it in the "main" folder and the rest of it scattered other places didn't add much value. About the only real problem caused by having everything go to the same output folder is that it causes background build processes like yours (and to a lesser extent .NET Demon) to behave badly sometimes. At any rate, I completely restructured the project so that everything goes into its own folder during the build, and then the things that need to be in a particular place are moved with post-build steps. I also have some pre-build steps, but these are mostly just to make sure that any applications that might interfere with the build are shut down when I start building an affected project. Additionally, some of the assets have to be in one place for the test runner, and in a different location for production, but I'm trying to reduce as much of that as I can.

My application has assets in a whole bunch of different places, using several different forms of IPC to communicate between them. For example, I have quite a bit of code in a VSTO Addin, which is hosted inside of an AppDomain that's created by Microsoft Word via a COM interface. To add insult to injury, Microsoft Word is actually created using out-of-process COM automation by a third application, and there is a fourth application that's acting as an orchestrator. I even have an IPC channel between the VSTO Addin and another .NET assembly that is hosted inside the main application using DLL injection. I communicate between all of these pieces using various types of IPC, depending on context.

Too Much Detail - I got off the rails a bit, but let me get back to your last question, regarding test coverage in external applications. I actually don't just have a problem with test coverage; I also have a problem just getting many integration tests to work correctly at all – but I haven't struggled with it enough to ask for help yet.

As one example, I would like to be able to test the code in VSTO add-ins, but it's tough to do with the test runner because the test runner always wants to start its own process. There is a certain amount of code in the Addin that doesn't care where it is because it's not interacting directly with Word, so I can test that code by simply loading the Addin from the test runner, and calling those components directly. At the moment, that probably accounts for about half of my code base in the Addin, and testing it is no big deal, as long as I'm a little careful to watch for side effects from being hosted differently than normal. Over time, there will be more and more code that fits into this category. I expect the code that interacts directly with Microsoft Word to more or less double over the next six months, and the rest of the code to increase by a factor of 5-10.

Where it gets tricky is if I want to test the parts that actually interact with Word. At the moment, about one third of my code actually has some kind of meaningful interaction with Word that is difficult to mock or simulate. You would think I would be able to mock it with ease, but the Word Object Model is so nonsensical and inconsistent that a lot of things just don't work the way you would expect, and the only way to be sure is to actually try it in place. This means ideally I would like to have all of the integration tests actually work inside of Word just like they would in production.

The only way I can think of to handle this across the process boundary is to either:
  1. Create an IPC channel specifically for communicating with the tests, and always run the test engine out of process from the actual tests

  2. Somehow host the test engine directly in Word, and just put up with the fact that the test tool isn't as nice as the one you have in the IDE


I haven't done any real experimentation around the second option, but it seems like it's probably what I will have to do for real integration testing. I still have some experimentation to do, so it's too early to tell what's going to work.

So to get back to your question, my particular code base has very little code that does anything complicated internally, and a lot of code that communicates between different processes. It's very rare for me to have more than a dozen lines of code that do any kind of calculation without having to go to some other process for some reason. That means this cross-process testing is something that would add a lot of value to me, but I don't know how much it would add for people doing normal projects. Most of the code that I want to test is actually on the other side of a process boundary most of the time.

So anyhow, thanks for letting me rant. I don't actually expect you to find an answer for such an unusual situation, but if you have any ideas I'd be happy to try them out. For the moment, I'm not worrying so much about indications of code coverage; I'm really just trying to get all of my testing to work, and I haven't even gotten several of the things I want to test, like VSTO integration testing, to work at all with any test runner. Hopefully, next week I'll have better news.

For the moment, I'd say NCrunch is doing what it needs to do.

Remco
#6 Posted : Saturday, May 17, 2014 6:07:16 AM(UTC)
I find it quite remarkable how similar your situation is to the NCrunch codebase itself. Consider that NCrunch itself is hosted inside Visual Studio (which, like MS Word, is a largely unmanaged, COM-heavy, aging beast). As with NCrunch, the most valuable tests in your situation involve the interaction of many different components and processes communicating in different ways. You haven't elaborated much on whether you support many different platforms (i.e. different versions of Word), but I expect that over time this might become a consideration for you?

On this basis, I can probably provide quite a bit of advice around ways to structure your test suites to make them work well with continuous testing. The NCrunch codebase itself is 100% continuous over itself, fully parallelised and cross-platform. It's also quite big, and tests continuously across all major versions of Windows and the last 4 versions of Visual Studio. With a bit of investment, I see no reason why the same thing cannot be achieved for any other project.

Conventional testing wisdom would suggest trying to find ways to decouple from the hosting process (MS Word) as much as possible to ensure faster and easier testability. In most cases I would agree with this, but as most of your logic is around integration, I would instead suggest investing as much as possible in the infrastructure around how MS Word is hosted and your code is loaded into this host as part of a test run. Sure, the tests will be slower, and for the time being you might need to live without full code coverage tracking, but it'll be worth it for the certainty your tests will give you while you perform integration work. Testing integration work manually is VERY expensive and will put your productivity through the floor. The sooner you can get the interaction with MS Word operating continuously, the sooner you will save much time and frustration.

My knowledge of how VSTO Addins are orchestrated by MS Word is very much lacking, but if at all possible, I would recommend against the second option you've suggested of hosting the test engine directly in Word. This will pin your entire infrastructure down to only what MS Word can provide to you, and it will make it difficult or impossible to control your test environment or integrate cleanly with NCrunch.

If at all possible, you want a test infrastructure that can:
1. Assemble all required components (exes, binaries, resource files, etc) into a directory structure that closely resembles the runtime application when it is installed on the user's machine
2. Load/install the VSTO into MS Word
3. Establish an IPC or TCP connection from the testing process into MS Word
4. Drive the code/automation inside MS Word, returning results over the connection to the test process
5. Analyse the results, perform assertions
6. Close down MS Word cleanly
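
As a very rough illustration only, such an infrastructure might look like the NUnit skeleton below. The directory names, port number and wire protocol are all invented, and you'd need real plumbing behind each step:

using System.Diagnostics;
using System.IO;
using System.Net.Sockets;
using NUnit.Framework;

[TestFixture]
public class WordAddinIntegrationTests
{
    private Process _word;
    private TcpClient _channel;

    [TestFixtureSetUp]
    public void AssembleSandboxAndStartWord()
    {
        // 1. Assemble components into a live-like directory structure.
        string sandbox = Path.Combine(Path.GetTempPath(), "WordTestSandbox");
        Directory.CreateDirectory(sandbox);
        foreach (string file in Directory.GetFiles("Components")) // invented source directory
            File.Copy(file, Path.Combine(sandbox, Path.GetFileName(file)), true);

        // 2. Start Word, assuming the add-in has been registered to load from the sandbox.
        _word = Process.Start("WINWORD.EXE");

        // 3. Establish a connection from the testing process into MS Word.
        _channel = new TcpClient("localhost", 9050); // invented port convention
    }

    [Test]
    public void Addin_responds_inside_Word()
    {
        // 4-5. Drive the code inside MS Word and assert on what comes back.
        var writer = new StreamWriter(_channel.GetStream()) { AutoFlush = true };
        var reader = new StreamReader(_channel.GetStream());
        writer.WriteLine("RUN OpenDocumentAndReport"); // invented wire protocol
        Assert.AreEqual("OK", reader.ReadLine());
    }

    [TestFixtureTearDown]
    public void ShutDownWord()
    {
        // 6. Close down MS Word cleanly.
        if (_channel != null) _channel.Close();
        if (_word != null && !_word.HasExited) _word.CloseMainWindow();
    }
}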

I'm not entirely sure about how you're currently handling the VSTO installation, or if it's possible to isolate a VSTO addin to a single instance of Word. If you can do this, then it's worth rigging up these tests so they can run in parallel. This will serve to mitigate the loss in performance associated with having so much setup involved. If this proves to be too difficult to do, you might want to look into creating a set of virtual machines that run various different versions of MS Word. The virtual machines can be launched programmatically by your tests and can be communicated with using a custom rig-up over TCP. Assuming you have enough memory available on your host machine, you could have several different VMs running side-by-side, testing your code against multiple versions of Word fully continuously.

There are a couple of tricks you'll need to use to assemble your components into a sandbox so they can operate in a live-like environment. This is because NCrunch normally constructs its own test process by wiring your assemblies together from different directories, which works great for unit testing, but not so well when you need to have the assemblies in a consistent structure relative to each other. This can be achieved using the Implicit Project Dependencies configuration setting in combination with methods available on the NCrunch environment class. You can use the information that NCrunch makes available inside the test process to find all the components needed in your sandbox, then copy them into the sandbox manually.
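
A minimal sketch of that copy step, assuming a reference to the NCrunch.Framework assembly (do check the exact NCrunchEnvironment member names against the NCrunch documentation before relying on them):

using System.IO;
using NCrunch.Framework;

public static class SandboxBuilder
{
    public static void CopyComponentsTo(string sandboxDir)
    {
        Directory.CreateDirectory(sandboxDir);

        if (NCrunchEnvironment.NCrunchIsResident())
        {
            // Under NCrunch, ask it where it has built the dependency assemblies
            // (including those declared as implicit project dependencies).
            foreach (string source in NCrunchEnvironment.GetImplicitlyReferencedAssemblyLocations())
                File.Copy(source, Path.Combine(sandboxDir, Path.GetFileName(source)), true);
        }
        else
        {
            // Fallback so the test still works outside NCrunch: copy from the
            // solution's fixed output directory (path is illustrative).
            foreach (string source in Directory.GetFiles(@"..\..\..\BuildOutputs"))
                File.Copy(source, Path.Combine(sandboxDir, Path.GetFileName(source)), true);
        }
    }
}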

Note that by making use of the above two tricks, your tests will cease to work without NCrunch unless you implement alternative code that can work without it. This normally isn't difficult to do, depending upon how your build system works.

Where possible, you want your build to output files using relative paths into a location that is fixed according to your solution file. As you've described, life is much easier when they all land in the same place. I recommend using something like this in your project files:

<OutputPath>$(SolutionDir)\BuildOutputs</OutputPath>

... This will also work with NCrunch, as NCrunch 'simulates' a solution inside the workspace directory structure created for each project when it performs a build. I'd recommend avoiding copying the assemblies around using post-build steps, as this can create confusion around which assemblies are updated and where the most recently built ones exist. They're also more complicated and are highly dependent on your solution structure.

If you have additional hardware available, or the opportunity to lease something from a cloud provider, Distributed Processing is very much worth looking into when dealing with slow running integration tests.

I hope some of these tricks help you and that they don't require too much work from your side. If it helps, I can say that such a setup will always pay for itself in the long haul on any project. There's nothing quite like knowing your whole application works within a couple of minutes of any change, especially with such an integration-heavy codebase.
WadeHatler
#7 Posted : Monday, May 19, 2014 5:32:01 PM(UTC)
Thanks for the detailed reply. I found it very helpful; it's got me very convinced that this is the right tool once I get everything set up and working right.

You are correct that our two solutions are remarkably similar. I guess the other thing that goes without saying is that we will be dealing with this same aging beast for years or decades to come.

I read up on your grid support, which seems like the perfect answer to me. I don't really mind that UI tests take so long, just that they kill my machine while they're running. My application is a little bit unusual in that it's about 90% automation, and 10% business logic, so a lot of my tests will have to be done in-place. I'm trying to figure out the right mix of unit tests versus integration tests, although the difference is a little bit subtle in this case.

I looked at your multiple configuration settings, and they seem very flexible; it looks like they should allow me to run fast tests on my machine and delegate slow tests to the network, so I'm going to play around with it.

I'm going to try setting up a remote grid node on a virtual machine today and see if I can get everything to work correctly. It seems like the perfect solution, so thanks for your help.
WadeHatler
#8 Posted : Monday, May 19, 2014 5:36:01 PM(UTC)
One last question. Do you have any recommendations on which unit test framework to use? I've been using xUnit.net, which has pretty crappy documentation but seems to be a bit of a step up from NUnit. I thought about switching to NUnit just to get better docs, but haven't bothered. Based on your experience, is any framework particularly good, especially when it comes to working well with NCrunch?

Thanks,
Wade Hatler
Remco
#9 Posted : Monday, May 19, 2014 9:29:35 PM(UTC)
I think your choice of framework carries a great deal of personal preference. Each of the 5 major frameworks supported by NCrunch has its strong points. Generally I'd suggest that if you're already using a test framework that you're familiar with, then just stick with it.

Xunit isn't a bad test framework and in terms of development it has a lot going for it right now. It's nearing a V2 release and NCrunch's integration with Xunit V2 is direct - meaning improved performance and compatibility.

My personal reasons for not using Xunit are because I prefer a BDD-like structure in which fixtures represent test cases, and tests themselves represent assertions. Xunit doesn't really work with this approach because each test method call involves the construction of a new fixture. In my understanding, this was a deliberate decision made by the developers to try and reduce the fragmentation of the test code.
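
To make that concrete, here's roughly what the fixture-as-test-case style looks like in NUnit (the Builder/BuildResult types are invented stand-ins for real code under test):

using NUnit.Framework;

public class BuildResult { public bool Succeeded; public bool OutputWasReplaced; }
public class Builder
{
    public BuildResult Rebuild(string project) // stand-in for the real action
    {
        return new BuildResult { Succeeded = true, OutputWasReplaced = true };
    }
}

[TestFixture]
public class When_a_changed_library_is_rebuilt // the fixture is the test case
{
    private BuildResult _result;

    [TestFixtureSetUp]
    public void Act()
    {
        // One action shared by every assertion below. Xunit can't share this
        // state, because it constructs a new fixture for each test method.
        _result = new Builder().Rebuild("Palabra.Library");
    }

    [Test] public void the_build_succeeds()         { Assert.IsTrue(_result.Succeeded); }
    [Test] public void the_old_output_is_replaced() { Assert.IsTrue(_result.OutputWasReplaced); }
}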

Something I do usually try to recommend is to avoid making use of specialised features within test frameworks - unless you have a very good reason for using them. Features such as NUnit's TestCaseSource, parameterized test fixtures, even Xunit theories are generally considered a bit more 'niche' and compatibility for them is a bit less widespread. NCrunch has been progressively hammered into supporting these features over the years, but ideally you don't want to tie your code to specific tools (famous last words as a tool developer) as this can be an unexpected pain point if your development environment changes in future. In most cases, it's possible to express tests using only the simple 'Test'/'Fact'/'TestMethod' attributes, then use inheritance or multiple test entry points if you need to call into tests multiple times. This can also make your tests much easier to understand if new developers join the project without much experience with the framework being used.
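
For example, instead of NUnit's TestCaseSource or an Xunit theory, plain inheritance can often give the same coverage (the parser types here are invented for illustration):

using NUnit.Framework;

public interface IParser { bool CanParse(string text); }
public class XmlParser : IParser { public bool CanParse(string t) { return t.TrimStart().StartsWith("<"); } }
public class JsonParser : IParser { public bool CanParse(string t) { return t.TrimStart().StartsWith("{"); } }

// The base class holds the test bodies as plain [Test] methods;
// each subclass acts as one 'parameter set'.
public abstract class ParserContract
{
    protected abstract IParser CreateParser();
    protected abstract string ValidInput { get; }

    [Test]
    public void Recognises_its_own_format()
    {
        Assert.IsTrue(CreateParser().CanParse(ValidInput));
    }
}

[TestFixture]
public class XmlParserTests : ParserContract
{
    protected override IParser CreateParser() { return new XmlParser(); }
    protected override string ValidInput { get { return "<doc/>"; } }
}

[TestFixture]
public class JsonParserTests : ParserContract
{
    protected override IParser CreateParser() { return new JsonParser(); }
    protected override string ValidInput { get { return "{}"; } }
}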
WadeHatler
#10 Posted : Monday, May 19, 2014 9:45:42 PM(UTC)
Which BDD tool do you like? I have so few tests at this point, and I like NCrunch so much, that I'm pretty inclined to just take your word for it and adopt it, and I do like the BDD approach. I've read a lot about it but never put it into practice, so now is as good a time as any.
Remco
#11 Posted : Tuesday, May 20, 2014 12:38:59 AM(UTC)
I don't really use a specific tool for BDD myself - I just kind of took what I liked from its approach and applied this in my daily work. I don't think it's something that needs specialised tooling to work with, although NCrunch does support MSpec and SpecFlow, both of which I believe were built with BDD in mind.

I recommend reading up on it a bit more, ideally from content written by Dan North. He's a very pragmatic guy and as he invented the term, he understands its definition better than anyone. Dan himself often says that many people he talks to tend to describe BDD as simply 'TDD done right'.
lukesch
#12 Posted : Tuesday, January 13, 2015 10:31:29 PM(UTC)
I'm running into a very similar issue and cannot figure out the right way to fix it. When NCrunch boots, it builds and runs tests fine. As soon as I make a change, it fails to build, saying:

"D:\Enlistments\Bothell\tmp\NCrunchBuild\ProcessWixObjTests\target\ProcessWixObj.tests.dll". Exceeded retry count of 10. Failed.
C:\Program Files (x86)\MSBuild\12.0\bin\Microsoft.Common.CurrentVersion.targets: Unable to copy file "objd\amd64\ProcessWixObj.tests.dll" to "D:\Enlistments\Bothell\tmp\NCrunchBuild\ProcessWixObjTests\target\ProcessWixObj.tests.dll". The process cannot access the file 'D:\Enlistments\Bothell\tmp\NCrunchBuild\ProcessWixObjTests\target\ProcessWixObj.tests.dll' because it is being used by another process.

Running Handle.exe, I see that NCrunch is the one holding the lock on the file.

Handle v4.0
Copyright (C) 1997-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

nCrunch.TaskRunner45.x86.exe pid: 32516 type: File 350: D:\Enlistments\Bothell\tmp\NCrunchBuild\ProcessWixObjTests\target\ProcessWixObj.tests.dll

I can rerun Handle.exe ad nauseam and nCrunch.TaskRunner will never drop the lock. We have OutDir and OutputPath set up as follows:

<OutDir>$(TARGETDIR)\</OutDir>
<OutputPath>$(TARGETDIR)\</OutputPath>
<OutDir Condition="'$(NCrunch)' == '1'">$(TEMP)\NCrunchBuild\$(MSBuildProjectName)\target\</OutDir>
<OutputPath Condition="'$(NCrunch)' == '1'">$(TEMP)\NCrunchBuild\$(MSBuildProjectName)\target\</OutputPath>

Based on what I see, it seems that NCrunch is fighting with itself. I don't know how to fix this. Any suggestions?
lukesch
#13 Posted : Tuesday, January 13, 2015 10:42:43 PM(UTC)
I switched OutDir to a relative path and it no longer fails to build -- NCrunch appears to pick a new temp dir each time it builds. However, it seems that the task runner now accumulates handles to these test DLLs. Is it possible NCrunch isn't properly releasing the handles?

<OutDir Condition="'$(NCrunch)' == '1'">NCrunchBuild\$(MSBuildProjectName)\target\</OutDir>
<OutputPath Condition="'$(NCrunch)' == '1'">NCrunchBuild\$(MSBuildProjectName)\target\</OutputPath>

Handle v4.0
Copyright (C) 1997-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

nCrunch.TaskRunner45.x86.exe pid: 22968 type: File lukesch 364: D:\Enlistments\Bothell\tmp\NCrunchBuild\ProcessWixObjTests\target\ProcessWixObj.tests.dll
nCrunch.TaskRunner45.x86.exe pid: 17020 type: File lukesch 368: C:\Windows\Temp\NCrunch\24280\17\sources\test\BuildTools\src\ProcessWixObj\NCrunchBuild\ProcessWixObjTests\target\ProcessWixObj.tests.dll
nCrunch.TaskRunner45.x86.exe pid: 26788 type: File lukesch 360: C:\Windows\Temp\NCrunch\24280\18\sources\test\BuildTools\src\ProcessWixObj\NCrunchBuild\ProcessWixObjTests\target\ProcessWixObj.tests.dll
nCrunch.TaskRunner45.x86.exe pid: 35172 type: File lukesch 368: C:\Windows\Temp\NCrunch\24280\20\sources\test\BuildTools\src\ProcessWixObj\NCrunchBuild\ProcessWixObjTests\target\ProcessWixObj.tests.dll
Remco
#14 Posted : Tuesday, January 13, 2015 10:59:47 PM(UTC)
Hi, thanks for posting!

You've correctly identified this as a workspace isolation issue. With the build reaching out of the workspace (into the TEMP dir), NCrunch wasn't able to isolate your build logic, and the builds were overlapping/clashing. Adjusting the OutDir to a relative path is the correct solution to the first problem.

The handle accumulation problem you've described is not entirely clear to me, but looking at the dump you've shown, is it possible that each file lock is being held by a new task runner? If this is indeed what you are observing, then this is likely to be behaviour by design. NCrunch will leave test runner processes active and re-use them where possible; there are a number of configuration options that control this behaviour. Are the file locks creating problems for you? They should be isolated to the NCrunch workspaces. The lock being held above by PID 22968 is suspect and may be the result of another absolute file reference in your project... I think you should examine the project to see if there is any logic that would try to reference ProcessWixObj.tests.dll using an absolute path.
lukesch
#15 Posted : Tuesday, January 13, 2015 11:19:58 PM(UTC)
PID 22968 was the task runner from before I changed the OutDir from an absolute path. It's surprising to me that the test runner would keep a handle on the DLL after it had completed running the tests. I understand why it wouldn't want to kill the test runner process itself... but why not let go of the DLL once it's done using it?

Also, can NCrunch detect if its OutDir is an absolute vs relative path? If it's an absolute path, it should limit its test running to 1 thread (testrunner) and notify the user that they should fix it to enable faster testing.

Edit: Really, if it's an absolute path, NCrunch can't even attempt to build until the testrunner is complete and has exited (which it doesn't seem to do). I think NCrunch is totally broken by an absolute output path.
Remco
#16 Posted : Wednesday, January 14, 2015 3:56:17 AM(UTC)
lukesch;6770 wrote:
PID 22968 was the task runner from before I changed the OutDir from an absolute path. It's surprising to me that the test runner would keep a handle on the dll after it had completed running the tests. I understand why it wouldn't want to kill the test runner process itself... but why not let go of the dll once it's done using it?


NCrunch itself isn't responsible for holding the lock on the DLL - this is being done by the CLR. When the CLR loads an assembly into the .NET process, it opens the file and holds a read/write lock on it until the process is terminated. In this case, the DLL is likely being loaded because it contains executable code being run inside the process. Unfortunately there is no way for NCrunch to unload this DLL entirely without terminating the process (or rebuilding the application domain, which is categorically similar to terminating the process).
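
You can see the difference with a few lines of code; loading from a byte array is the usual workaround when a file must stay unlocked (the path below is just a placeholder):

using System.IO;
using System.Reflection;

class AssemblyLockDemo
{
    static void Main()
    {
        string path = @"C:\Temp\SomeLibrary.dll"; // placeholder

        // LoadFrom keeps the file locked until the process (or its
        // AppDomain) is torn down; deleting the file now would fail.
        Assembly locked = Assembly.LoadFrom(path);

        // Loading from bytes leaves the file itself untouched, at the cost
        // of losing the assembly's location context (Assembly.Location).
        Assembly unlocked = Assembly.Load(File.ReadAllBytes(path));
    }
}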

lukesch;6770 wrote:

Also, can NCrunch detect if its OutDir is an absolute vs relative path? If it's an absolute path, it should limit its test running to 1 thread (testrunner) and notify the user that they should fix it to enable faster testing.


Unfortunately, the derived logic in MSBuild makes this much more difficult than it might immediately seem. NCrunch does detect certain conditions that are sure to be absolute paths (for example, if the OutDir is pointing to a different disk).

Running builds or tests with an absolute path while limiting the concurrency would unfortunately still create isolation problems. For example, if you ran your build in Visual Studio while NCrunch had a build running, they'd hit the same files and you could get some very random problems appearing that would be difficult to analyse.

lukesch;6770 wrote:

Edit: Really, if it's an absolute path, NCrunch can't even attempt to build until the testrunner is complete and has exited (which it doesn't seem to do). I think NCrunch is totally broken by an absolute output path.


This is correct. Absolute paths can create critical failures for tools like NCrunch that need to relocate and isolate your source code. Unfortunately because NCrunch has only a very limited understanding of the code it is executing (it's user code), it cannot fix all these problems automatically, and any attempt for it to do so would never be 100% reliable. The focus is instead on trying to make sure that people are aware that absolute paths do cause problems that need to be intelligently addressed by a human that understands the implications of changing them.