Login to database failed
cordoor
#1 Posted : Tuesday, October 2, 2012 5:00:09 PM(UTC)
Rank: Member

Groups: Registered
Joined: 10/2/2012(UTC)
Posts: 15
Location: Orem, UT

Thanks: 7 times
Was thanked: 1 time(s) in 1 post(s)
I just installed NCrunch. This product seems promising. Unfortunately, it hasn't worked for me yet.

If I use the NUnit GUI to run my unit tests, they all run fine. However, when NCrunch runs them, they all fail. The failure is inside a SetUpFixture SetUp operation while trying to connect to the database. The error message shows that it is trying to use the correct user name. Does NCrunch use Windows authentication of the logged-in user? Or is there a way for me to specify the user account under which it should run?
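
For reference, the fixture is roughly of this shape (a minimal sketch; the names and connection details here are illustrative, not our actual code):

```csharp
using NUnit.Framework;
using System.Data.SqlClient;

// Minimal sketch of the kind of fixture involved (names are illustrative).
// In NUnit 2.x, a [SetUpFixture] runs once for all tests in its namespace,
// using [SetUp]/[TearDown] on the fixture methods.
[SetUpFixture]
public class TestEnvironmentSetUp
{
    [SetUp]
    public void ConnectToDatabase()
    {
        // This is the step that fails under NCrunch but works under the NUnit GUI.
        using (var conn = new SqlConnection(
            "server=(local),1805;initial catalog=master;integrated security=SSPI"))
        {
            conn.Open();
        }
    }
}
```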

How do I get past this?

Thanks.

-Corey
Remco
#2 Posted : Tuesday, October 2, 2012 9:15:40 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Hi Corey,

Thanks for posting!

NCrunch will spawn test processes under the same user account you use to run Visual Studio, so in theory you should be able to give this user access to the database and the security should work without problems. NCrunch's test runner itself doesn't know anything about the behaviour of the code under test (i.e. there is no logic in NCrunch associated with database connections).

Where I've often seen issues with database connectivity is where the database connection string is being manipulated by code or by build steps. Note that NCrunch will automatically turn off any pre or post build events (you can configure this), so it may be worth turning these events back on if you're doing something like this.

It may be worth inspecting your .config file in the NCrunch workspace (right click a failed test, go to Advanced->Browse to workspace), to see if your connection string is correct in this file.
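
For example, the workspace copy of the .config should contain something like this (a generic illustration, not your actual file), and the connectionString value should survive the copy unchanged:

```xml
<configuration>
  <connectionStrings>
    <!-- Verify this value is identical to the one in your source tree -->
    <add name="Default"
         connectionString="server=(local),1805;initial catalog=MyDatabase;integrated security=SSPI" />
  </connectionStrings>
</configuration>
```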


Cheers,

Remco
1 user thanked Remco for this useful post.
cordoor on 10/3/2012(UTC)
cordoor
#3 Posted : Tuesday, October 2, 2012 11:10:47 PM(UTC)
Remco;2882 wrote:
NCrunch will spawn test processes under the same user account you use to run Visual Studio, so in theory you should be able to give this user access to the database and the security should work without problems. [...]

It's a pretty basic connection string and it is exactly the same when I run the unit test in the NUnit GUI as when NCrunch runs it. Here is the connection string:

server=(local), 1805;initial catalog=Users-TestAccount;integrated security=Yes;connection timeout=20;pooling=True;min pool size=2;max pool size=1000;application name=Build

I don't know how to proceed. The connection string is exactly correct in both cases. But in the NUnit case, it connects and everything works. In the NCrunch case, I get this exception:

Mozenda.Data.DataException : A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

I know what the exception means. I just don't know why it would be failing in NCrunch but not in NUnit with the exact same connection string.

Thanks.

-Corey
Remco
#4 Posted : Tuesday, October 2, 2012 11:40:48 PM(UTC)
If the connection string is correct, then one possible point of difference I can think of here (as you've already stated) would be the user that is being assumed by the test runner during the execution of your test.

During its execution, NCrunch will set up test runner processes under the name 'nCrunch.Host.*.exe'. You should be able to see these inside Windows Task Manager. When you run the test under NUnit, does the assumed user match the user used by the NCrunch process? I believe NUnit has a number of options that may spawn a separate process in order to run the test, so make sure you compare the right executable.

I have experienced strange behaviour with the (local) memory based connections in SQL server before. It may be worth trying a different connection type (i.e. TCP loopback/localhost) to see if this creates a different result, as there may be other moving parts within the SQL client that are less than obvious.
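
For example (a sketch; adjust the port and database to match your setup), you can force the TCP protocol explicitly with the `tcp:` prefix that the SQL client recognises:

```text
server=tcp:localhost,1805;initial catalog=Users-TestAccount;integrated security=Yes
```

If this variant behaves differently under NCrunch, that would point to the protocol selection (named pipes vs. TCP) rather than the credentials.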
1 user thanked Remco for this useful post.
cordoor on 10/3/2012(UTC)
cordoor
#5 Posted : Wednesday, October 3, 2012 5:27:24 AM(UTC)
Remco;2890 wrote:
If the connection string is correct, then one possible point of difference I can think of here (as you've already stated) would be the user that is being assumed by the test runner during the execution of your test. [...]

1. The user being assumed by the test runner processes (nCrunch.BuildHost.x86.exe and nCrunch.TestHost.x86.exe) is the same user assumed by the NUnit process (nunit-x86.exe). And it is the logged-in user.

2. I changed the connection string to use the computer name instead: Computer21, 1805;... I verified the connection string was properly changed for both NCrunch and NUnit.

Neither of these suggestions solved the problem.

What additional information can I give you that would help you track the problem down? It is likely to be something non-obvious. For example, this is a 64-bit Windows 7 operating system but my executables are x86 (I am building using strictly x86). Notice also that I am connecting on a non-standard SQL Server port (1805). Could there be a Windows firewall issue?

I would be surprised if it were *not* a bug somewhere in your stuff. The trick will be figuring out how to track it down.

Is anyone else having success connecting to their databases (specifically, SQL Server 2008) in their unit tests?

I wish I had better news than this.

-Corey
cordoor
#6 Posted : Wednesday, October 3, 2012 5:44:53 AM(UTC)
Ok, I found a clue.

Please do not spend any more time on this until I look into the clue and get back to you.

Thanks!

-Corey
1 user thanked cordoor for this useful post.
Remco on 10/3/2012(UTC)
cordoor
#7 Posted : Wednesday, October 3, 2012 6:50:50 AM(UTC)
I've been making progress. However, I am running across so many apparent bugs and limitations that I am quickly losing interest in the product, which is a shame because the concept is such a good one. Perhaps I am inclined to spend 15 minutes writing this post because I understand how it feels to be working furiously on a buggy product so customers can use it. (I'm doing this right now with a product I've launched: customers are signed up and others are begging to use it, but there are so many bugs that we shouldn't really even be allowing them to be customers ;-)

I would like to see something come of this product because the idea is so fantastic. But where to begin? Perhaps with my workflow:

1. Our code base is used for multiple businesses. In our system, we place the files related to a business in a folder at the root of C: like this: C:\<BusinessName>, where <BusinessName> is the name of the business. Our system looks at the name of this folder and uses it as the base of the database name to which it connects. Because each of our customers gets their own physical database, and because we want to use connection pooling, we first connect to the SQL Server master database and then issue a "use" to switch to the appropriate database for the query. The exception was not being thrown when connecting to master, but when performing the "use", because the database name did not exist. The database name did not exist because NCrunch copies files to C:\Users\..., and so the system name became "Users". Understandably, this was worked around by placing our simple system config file in C:\Users\Programs so that the system knew where to initially connect. This was solved.

2. The next problem was that an initial setup phase of our unit tests creates a few databases on-the-fly if they do not already exist. These databases are used only for unit testing, and we like this type of automated process because a developer then doesn't have to remember to manually create them when they switch to a new machine or something like that. The problem is, our system needs database scripts to create these databases. Somewhat surprisingly, the scripts were not copied over to the temp location in C:\Users, even though they are included in the projects as <Content> nodes in the project files. These should have been copied.

3. So my next step was to try to have them copied using the "Additional files to include" feature of the NCrunch project settings. I clicked the "..." button, chose "Add Directory...", picked the directory, and saw it put "Scripts\**.*" in the list of additional files to include. I ran the test in debug mode and saw that files were not found because the working directory was entirely different from what our system is set up for, which leads to...

4. NCrunch did not respect the Output path defined on the project. Our projects were initially created with Visual Studio 2003. All of our .NET projects are built on the x86 platform, *not* the Any CPU platform. In Visual Studio 2003, the default output path for x86 platform projects was bin\x86\Debug, so that is what our output path is set to. But NCrunch appears to have used the new Visual Studio 2010 output path for this project platform (we are now using Visual Studio 2010) of bin\Debug. This meant that the relative path our code uses to find the script files was off by one directory level and couldn't find them.
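
In .csproj terms, the two project-file details involved in problems 2 and 4 look roughly like this (the file names and paths are examples, not our actual files):

```xml
<!-- Problem 2: the database scripts are declared as Content items like this -->
<ItemGroup>
  <Content Include="Scripts\CreateTestDatabase.sql" />
</ItemGroup>

<!-- Problem 4: the legacy VS2003-era output path declared for the x86 platform -->
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|x86' ">
  <OutputPath>bin\x86\Debug\</OutputPath>
</PropertyGroup>
```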

It is at this point I gave up and headed to this forum to explain my progress.

Through this process I have had Visual Studio lock up three times. Each time I've had to go to Task Manager and kill each of the NCrunch processes; once they are all killed, everything wakes up again.

And I've had to restart Visual Studio a couple of times. Then I noticed that the unit tests automatically run. I disabled that right away, since I'm in trouble-shooting mode, by choosing the "run all tests manually" mode. But restarting Visual Studio sets that back to automatic. The setting is not sticky.

Additionally, the circle markers in the code rarely update. They are almost always out of date. And there is no way to clear them out and have them represent only the single unit test I have most recently run, so I can actually trouble-shoot where the crashing problem is occurring (without debugging). The exception information is not useful because you cannot narrow down the specific exception details for a specific test. Also, why aren't these details right in the tests panel?

I've seen inconsistent results in files copied. For example, one .NET project consumes a C++/COM project, so a proxy DLL is created and copied; it was building just fine for several hours. Then suddenly, it no longer built. The problem was that the proxy file was not being copied (I'm sure you could have guessed this as the problem). So I went in and manually added it to "Additional files to include" and all was well. Why this happened, I don't know.

Another problem: I had a test that got stuck in the "Running" state. I could not kill it. Even exiting and restarting Visual Studio did not solve the problem. Even killing all of the running NCrunch processes didn't solve the problem. I actually don't remember how I resolved it. Perhaps it was the combination of unloading the project, killing NCrunch processes, and closing Visual Studio. I can't remember.

At any rate, I hope some of this information helps.

I will likely put this one on the shelf for a few months and come back to it when some of these sorts of issues are ironed out (unless you are quick to fix and release updates, then I would be willing to try again in a week or two).

Thanks again for responding quickly on these forums.

-Corey
cordoor
#8 Posted : Wednesday, October 3, 2012 7:04:52 AM(UTC)
Ok, a quick change to the code to use a relative path specific to the output directory resolved the final issue. Now the unit tests properly pass.

So I am going now and feel like I can do a better evaluation of the usefulness of the tool. I hope the lost day of work will be paid back in productivity gains with this thing ;-)

One thing that concerns me about the current UI regarding project configuration is that there is no way to immediately tell in the project list which projects have custom configuration and which don't, what that custom configuration is, etc. It would be nice to see indicators that tell me if a project is set to include pre/post build steps, if it has additional files, etc. Right now, I have to arrow-key through all ~60 of the projects to remove a custom config I may no longer want (e.g. I figure out a way to get rid of a few post build steps to simplify things, but now I need to go and stop NCrunch from doing that for the project).

Also, the actual configuration properties for a project really ought to sit to the right of the project list, since there is a ton of unused space there. And it would be nice if the project whose properties are being viewed were highlighted in the list (instead of just a focus indicator, which vanishes when focus leaves that control).

A standard scroll bar would be nice, too.

Anyway, I'm able to move forward now and use the product as it was meant to be used (while doing work unrelated to getting the product going, hehe). I'll try it out for the rest of this week.

Thanks again for responding on the forums. Good to know you're around :-)

-Corey
Remco
#9 Posted : Wednesday, October 3, 2012 8:46:27 AM(UTC)
Great to hear you managed to find the problem. I find it interesting that SQL would throw an error like this over a relative path issue, although I guess with a view of the code this probably makes sense.

Thanks for all your feedback. Others have also requested making better use of the column space in the configuration view, so I might look a bit more into how this could be done tidily. There are a few other things I can let you know about that may make your configuration experience easier:

- It's possible to use multi-select when setting configuration settings... so if you have a setting that should be applied to all 60 projects (or a large group of them), this should make life easier
- The additional files to include setting is also present against the solution itself (in case you have additional files that are very common across all projects)
- v1.42 will include a change that shows a project-level warning if the project contains pre/post build steps that have been deactivated. I think this would also make things easier for the use case you've described

I wish you the best of luck with your evaluation. If you have any further problems, please feel free to share them on this forum.


Cheers,

Remco
1 user thanked Remco for this useful post.
cordoor on 10/3/2012(UTC)
cordoor
#10 Posted : Wednesday, October 3, 2012 8:56:56 AM(UTC)
Remco;2899 wrote:
Great to hear you managed to find the problem. [...]

Thanks for the response.

We continue to run into roadblocks, most of which seem solvable if you're made aware of the kinds of issues that are happening.

When you copy files to the temporary location, you are not preserving the date/time stamp of the file as it exists in its original location. This is important for many reasons, but for us the most important is that we have an automated process that runs in the test fixture setup, checking the date/time stamp of all of the database scripts and comparing them to the values stored in a system table in our database. If any of them have changed, the database is dropped and rebuilt from the scripts.

I can see this happening for each unit test that is run: the database gets dropped and then re-created. This means:

1. The above is happening: the date/time stamps of the files are not preserved (I verified this by looking in your temp directory).

2. You are executing the test fixture setup and teardown with each and every unit test instead of just once for all unit tests within the fixture (this goes against the way NUnit works). I can only assume this is happening because you are queuing the tests up to run individually when they really should be queued as a test fixture package, or you would need to find some way to re-use the fixture setup and teardown.
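
For context, the check in our fixture setup is roughly of this shape (a simplified sketch; the two helper methods are placeholders for our real implementations):

```csharp
using System;
using System.IO;

// Simplified sketch of the timestamp check described above.
static class ScriptFreshnessCheck
{
    public static void RebuildDatabaseIfScriptsChanged(string scriptsDirectory)
    {
        foreach (var script in Directory.GetFiles(scriptsDirectory, "*.sql"))
        {
            DateTime onDisk = File.GetLastWriteTimeUtc(script);
            DateTime recorded = LookUpRecordedTimestamp(script); // reads the system table

            if (onDisk != recorded)
            {
                // Under NCrunch the copied scripts get fresh timestamps, so this
                // branch fires on every run and the database is rebuilt each time.
                DropAndRebuildDatabase();
                return;
            }
        }
    }

    // Placeholder helpers (our real code queries/rebuilds the database).
    static DateTime LookUpRecordedTimestamp(string script) { throw new NotImplementedException(); }
    static void DropAndRebuildDatabase() { throw new NotImplementedException(); }
}
```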

Anyway, this is an example of another thing that makes it difficult for us to transition to using NCrunch.

I hope these sorts of quirks and bugs can be worked out soon.

Thanks.
Remco
#11 Posted : Wednesday, October 3, 2012 9:19:17 AM(UTC)
When copying files into workspaces for processing, NCrunch will attempt to preserve the timestamp of any project file that is not a source file and has not been opened by the IDE. The reasoning behind this is actually quite simple: the contents of the workspace are transient and equivalent to the contents of your project that are held in memory (for example, if you have a file open in the IDE and you make changes to it in memory, what should the date stamp be?). A logical solution to this could be to simply track the date/time every file in memory was changed and use this, although until now no one has reported problems with the existing logic. I'm curious as to the approach you've taken with the SQL scripts ... it seems to me that relying on the date/time stamp for SQL ordering could create huge opportunities for accidental reordering of execution (and not just by NCrunch!).

NCrunch relies on the logic of the underlying test framework to execute tests, although to preserve clean concurrency it will invoke the framework to execute tests in batches. NCrunch will re-use test processes between batches, which means that it's possible for a test (or its fixture) to be executed multiple times for the same application domain. There are many reasons for why this is done the way it is (a huge number of features rely on it), so unfortunately this is something that cannot be changed. My recommendation is to avoid writing tests that are sequence dependent or cannot be rerun within the same process.

Cheers,

Remco
1 user thanked Remco for this useful post.
cordoor on 10/3/2012(UTC)
Remco
#13 Posted : Wednesday, October 3, 2012 10:38:13 AM(UTC)
Sorry - I just realised there was a post of yours above (the big one) that I missed reading ... I realise there was a bit of frustration behind this post, so I'll try my best to reply to this now.

Considering the level of complexity of your solution, it isn't a huge surprise to me that getting NCrunch to work for you is a challenge. The main features of NCrunch (workspacing, parallel execution, on-the-fly analysis) do impose new constraints on how tests should be engineered. While NCrunch has been hammered out over the last few years to try and make configuring it for a large solution as easy as possible, there are constraints that cannot be removed as to do so would remove the main features along with all the value that they create.

Anyway, as I'm sure you understand, NCrunch works by physically separating each of your projects into an independent workspace in order for the project to be processed independently and in isolation. This is a really major change to the way that solutions are normally built and executed, and not every solution was built with it in mind. When you factor in the complexity and history with a large solution such as yours (with likely many man-years of development behind it), it is impossible to predict exactly how the code will behave when first introduced to a tool such as NCrunch. Generally I try to encourage people to start with smaller solutions first in order to gain an understanding of how the tool works, as there is nothing more discouraging than looking out over a sea of red tests each presenting their own quirks and challenges. The integration test environment you've described has been built up progressively over time using non-continuous tooling and as such there may seem to be many parts that resist NCrunch with conviction.

I'm not saying this in an effort to vindicate NCrunch over what I can imagine has been an extremely frustrating and time consuming experience for you, but rather that it may be worth considering a different approach. While it appears simple at face value, the implications of the NCrunch engine for integration testing can take some time to learn, and tackling them head-on can create a bit of a 'do or die' scenario that can feel like it's going on forever.

I really recommend trying a slow transition into the use of a continuous test runner for this solution. Try ignoring (with NCrunch) all your integration tests to start with, and just using NCrunch to execute unit tests for a week or so. If nothing else, this will help you to evaluate the tool in a usable and trouble-free scenario to see whether it really is worth your while. As you learn more about the behaviour of the engine and discover how best to utilise it in your environment, you can start selectively turning on integration tests and examining how they behave while being executed continuously. In this way, you should also immediately become aware of any tests that may be causing excessive system load (as many of them are backing onto a database, the DBMS can do a fine job of degrading system performance if it's being hammered). This approach can also save you from stress-related disaster, as there is no telling how long it may take to get the integration tests running continuously and the last thing you need is to deal with pressing deadlines while evaluating a tool on the job.

I'll try to respond to each of the individual problems you've raised, and if you like, I will do my best to help you with troubleshooting them.

Problem 1: Absolute file paths to business directories - Your described solution to this is sensible. NCrunch doesn't deal well with absolute file paths as they prevent it from properly isolating components under test. Make sure the configuration file you've placed doesn't sit within any of the generated NCrunch directories, as these will be routinely removed (which could cause the loss of your configuration file).

Problem 2: This is a concern to me. Files that are included in the .proj XML should ALWAYS be copied into a workspace (especially if they are included with a Content tag). Are there any build or test steps you're aware of that may remove the files? There is a way to confirm whether the files have been copied by inspecting the NCrunch diagnostic log written out when the workspace is built. To view this log, you'll need to set the 'Log verbosity' to 'Detailed' under your global configuration, then reset the NCrunch engine (i.e. disable and enable). After the offending project has been built, go to the processing queue and select it in the list. You should see a large dump of text output in the lower pane. Look through entries in the log such as 'Writing new workspace member ....' to see if you can find evidence of the files that are missing. If the files are shown to be written in the diagnostic log, then this means that the files were copied to the workspace but subsequently removed by something else (misbehaving build or test logic perhaps?).

Problem 3: From your description, I think this is caused by Problem 4 ... let me know if this isn't the case and I'll help you delve a bit deeper into what happened here.

Problem 4: NCrunch will always build projects using the default MSBuild $(Configuration) specified in the .proj file, in the same manner as if you built the project using a command-line MSBuild invocation. From your description, I suspect that you have Visual Studio configured to inject a non-default configuration into the build, which means that your build experience between Visual Studio and NCrunch would be inconsistent. I would say it's a high priority to properly fix this issue before addressing any others, as your system would have been tested and developed with the configuration selected by Visual Studio in mind. There are two clean ways you can solve this - either you can change the default configuration inside the .proj files to be consistent with your Visual Studio configuration, or you can configure NCrunch to use the same configuration as Visual Studio. There is a project-level NCrunch configuration value called 'Use Build Configuration' that allows you to specify the name of the $(Configuration) property in free-form. Try selecting all the projects in the configuration pane and applying this setting in bulk. I really recommend against re-engineering your tests to work from a different output path, as this would create an inconsistency between your non-NCrunch testing environment and your NCrunch testing environment (which is undesirable!).
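
For reference, the default that MSBuild (and therefore NCrunch) falls back to when nothing is injected comes from the standard conditional properties near the top of a .csproj:

```xml
<!-- MSBuild uses these values whenever no Configuration/Platform is passed in,
     which is what happens in a bare command-line build or under NCrunch -->
<PropertyGroup>
  <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
  <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
</PropertyGroup>
```

If Visual Studio is selecting 'Debug|x86' while these defaults say 'Debug|AnyCPU', the two environments will build to different output paths.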

Visual Studio lockups: NCrunch doesn't make blocking calls to the task runner .exes from the IDE on the primary thread, so my feeling is that most likely these lockups are being caused somehow by test behaviour or through system load. Problems like this can be VERY hard to diagnose because when the lockups occur, it can be very difficult to get meaningful information about the state of the test runner or which tests are being executed. If this keeps happening, I recommend selectively ignoring sections of your tests with NCrunch to see if you can find a failure pattern. Something that may also help is if you want to try hooking a debugger onto the locked up Visual Studio instance itself and inspecting the stack traces of running threads - this could help to confirm or deny whether I can help solve the problem for you with an NCrunch fix.

'Run all tests manually' not being remembered: NCrunch stores your currently selected engine mode in a solution-specific file, i.e. 'MySolution.ncrunchsolution.user'. It will only save this engine mode on close of the solution or clean exit of Visual Studio. Check that the file isn't read-only (sometimes source control software can lock the file). Based on the previous issue you've described (VS lockups), I'm wondering if it might be possible that you haven't experienced a clean exit after setting the engine mode?

Circle markers not updating: This is a symptom of a solution that has many long running tests. If you haven't yet completed a full end-to-end test run with all tests passing (which I strongly suspect is the case), then NCrunch won't have much reliable information about how long each of your tests takes to execute. This means that the execution engine won't be able to make intelligent decisions about how to prioritise tests or threading/task resources, and it will probably be tying up too many of its tasks in long running tests. This issue should effectively solve itself once you've managed a full run through of the entire solution with all enabled tests behaving themselves properly. The processing queue window can be helpful in understanding the behaviour of the engine and why it may seem to be taking a long time to return results.

Inconsistent results in files copied: Can you share any more details about the proxy DLL you are using or how it is generated? Is it created using a build step? Is the DLL being generated using a cross-project dependency of some kind? This could be quite a complex situation that perhaps should be failing consistently and has only been working due to a side-effect. I'm not sure if adding the file to the 'Additional files to include' setting is the right way to solve this problem. If you can share some more details, I'll help you work it out.

"Running" state test: There are certain error conditions that can create this scenario. They are rare and infrequent, and usually benign (i.e. the test will return to a normal state the next time it is run). If you can find a way to reproduce this issue, I'd really like to take a closer look at it. Most likely it was caused by an internal problem in the engine (in which case submitting a bug report may very well give me the information I need to fix it).

Thanks for all your effort in trying to get NCrunch to work on this solution. I appreciate going through this process with you as it better enables me to understand the sorts of pains people go through when first introducing the tool into such a complex environment. I hope I can continue to be of help in your implementation/evaluation of the tool.


Cheers,

Remco



1 user thanked Remco for this useful post.
cordoor on 10/3/2012(UTC)
cordoor
#12 Posted : Wednesday, October 3, 2012 3:30:01 PM(UTC)
Rank: Member

Groups: Registered
Joined: 10/2/2012(UTC)
Posts: 15
Location: Orem, UT

Thanks: 7 times
Was thanked: 1 time(s) in 1 post(s)
Remco;2903 wrote:
When copying files into workspaces for processing, NCrunch will attempt to preserve the timestamp of any project file that is not a source file and has not been opened by the IDE. The reasoning behind this is actually quite simple - the contents of the workspace are transient and equivalent to the contents of your project that are held in memory (for example, if you have a file open in the IDE and you make changes to it in memory, what should the date stamp be?). A logical solution to this could be to simply track the date/time every file in memory was changed and use this, although until now no one has reported problems with the existing logic. I'm curious about the approach you've taken with the SQL scripts ... it seems to me that relying on the date/time stamp for SQL ordering could create huge opportunities for accidental reordering of execution (and not just by NCrunch!).

NCrunch relies on the logic of the underlying test framework to execute tests, although to preserve clean concurrency it will invoke the framework to execute tests in batches. NCrunch will re-use test processes between batches, which means that it's possible for a test (or its fixture) to be executed multiple times for the same application domain. There are many reasons for why this is done the way it is (a huge number of features rely on it), so unfortunately this is something that cannot be changed. My recommendation is to avoid writing tests that are sequence dependent or cannot be rerun within the same process.

Cheers,

Remco


I think perhaps you didn't understand my post on this one because your response is confusing.

1. NCrunch is currently *not* preserving the time-stamp of the copied files. I can verify this by comparing the two folders. Specifically, the SQL script files need to have their time-stamps preserved. We have used this process for five years now without fail. It works because the SQL scripts rarely change, and all the SQL scripts do is create a copy of our database (this itself is important to test, because each of our customers gets their own physical database, so we have thousands of them). When one of the scripts does change (this is rare), we need the test fixture to rebuild the database - we don't want programmers to have to worry about any of this at all. So I guess what I was saying is that NCrunch is *not* preserving time-stamps, and this is a problem. Even though files can change in memory, none of these files were ever opened in the IDE, so I would have thought that at a minimum the date/time of the file would have stuck.

2. Regarding the test fixture set up and tear down, I said something wrong. I meant that we have a SetUpFixture SetUp and TearDown. Unlike a test fixture's set up and tear down, these are meant to be performed only once, before all the tests in the batch are run. The way NCrunch is implemented, it is as if I had put the SetUpFixture setup and tear down code in the TestFixture as SetUp and TearDown. This is a major problem for us because we took advantage of this NUnit feature precisely because the setup and teardown for the batch can be relatively lengthy. I would have thought NCrunch would respect this design feature of NUnit and, at least when I do something like start all unit tests for a project, run these only once for the batch. I hope this makes sense.

As a note, none of our unit tests are sequence dependent at all. That isn't the problem. The problem is that the SetUpFixture SetUp and TearDown appear to be called for each test - as if I started NUnit, ran one unit test, stopped NUnit, started NUnit again, ran another test, and so on - whereas what we want is for these to run once per batch of unit tests, per the NUnit design.
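For anyone following along, the distinction being described can be sketched like this (the class and method names are illustrative, not Corey's actual code; attribute usage follows NUnit 2.x, which was current when this thread was written - NUnit 3 later renamed these to [OneTimeSetUp]/[OneTimeTearDown] for SetUpFixtures):

```csharp
using NUnit.Framework;

// A [SetUpFixture]'s SetUp/TearDown bracket ALL tests in its
// namespace: under the NUnit GUI they run once per test run,
// not once per test.
[SetUpFixture]
public class DatabaseEnvironment
{
    [SetUp]
    public void BeforeAnyTests()
    {
        // e.g. run the SQL scripts to build the test database
        // (expensive, so it should happen once per batch)
    }

    [TearDown]
    public void AfterAllTests()
    {
        // e.g. drop or detach the test database
    }
}

// An ordinary [TestFixture]'s SetUp runs before EVERY test.
[TestFixture]
public class CustomerTests
{
    [SetUp]
    public void BeforeEachTest() { /* cheap per-test setup */ }

    [Test]
    public void CanLoadCustomer() { /* ... */ }
}
```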

Thanks.

-Corey
cordoor
#14 Posted : Wednesday, October 3, 2012 4:10:05 PM(UTC)
Rank: Member

Groups: Registered
Joined: 10/2/2012(UTC)
Posts: 15
Location: Orem, UT

Thanks: 7 times
Was thanked: 1 time(s) in 1 post(s)
Remco;2904 wrote:
I really recommend trying a slow transition into the use of a continuous test runner for this solution. Try ignoring (with NCrunch) all your integration tests to start with, and just using NCrunch to execute unit tests for a week or so. If nothing else, this will help you to evaluate the tool in a usable and trouble-free scenario to see whether it really is worth your while. As you learn more about the behaviour of the engine and discover how best to utilise it in your environment, you can start selectively turning on integration tests and examining how they behave while being executed continuously. In this way, you should also immediately become aware of any tests that may be causing excessive system load (as many of them are backing onto a database, the DBMS can do a fine job of degrading system performance if it's being hammered). This approach can also save you from stress-related disaster, as there is no telling how long it may take to get the integration tests running continuously and the last thing you need is to deal with pressing deadlines while evaluating a tool on the job.


We consider all of the tests, even those that go against a database, to be unit tests. We don't have many. We have perhaps 700 unit tests. We can execute all of them in NUnit in about 15 seconds. So they aren't particularly long running as far as we're concerned.

I'm not convinced it is a big project to get NCrunch working with our stuff. I would say that if the time-stamp issue were resolved and the SetUpFixture's SetUp and TearDown were called once each for a batch of unit tests (for example, all of those that are queued), then we would be very close. Even with just the time-stamp issue resolved, I think we would be able to use your product.

We are not interested in using your product with only a portion of our unit tests (those that don't go against the database). For us, we will need it to work with all of our unit tests or it won't be something we will use.

Remco;2904 wrote:

Problem 1: Absolute file paths to business directories - Your described solution to this is sensible. NCrunch doesn't deal well with absolute file paths as they prevent it from properly isolating components under test. Make sure the configuration file you've placed doesn't sit within any of the generated NCrunch directories, as these will be routinely removed (which could cause the loss of your configuration file).


It wasn't an absolute file path issue. It was a conflict with the design of our system, where we use the name of the root folder under which the source code is ultimately checked out as the name of the system (because our code base is shared by two different companies/products). I resolved it with a code change and don't see that NCrunch needs to be changed in any way to resolve this issue.

Remco;2904 wrote:

Problem 2: This is a concern to me. Files that are included in the .proj XML should ALWAYS be copied into a workspace (especially if they are included with a Content tag). Are there any build or test steps you're aware of that may remove the files? There is a way to confirm whether the files have been copied by inspecting the NCrunch diagnostic log written out when the workspace is built. To view this log, you'll need to set the 'Log verbosity' to 'Detailed' under your global configuration, then reset the NCrunch engine (i.e. disable and enable). After the offending project has been built, go to the processing queue and select it in the list. You should see a large dump of text output in the lower pane. Look through entries in the log such as 'Writing new workspace member ....' to see if you can find evidence of the files that are missing. If the files are shown to be written in the diagnostic log, then this means that the files were copied to the workspace but subsequently removed by something else (misbehaving build or test logic, perhaps?).


I will get back to you on this one.

Remco;2904 wrote:

Problem 3: From your description, I think this is caused by Problem 4 ... let me know if this isn't the case and I'll help you delve a bit deeper into what happened here.

Problem 4: NCrunch will always build projects using the default MSBuild $(Configuration) specified in the .proj file, in the same manner as if you built the project using a command-line MSBuild invocation. From your description, I suspect that you have Visual Studio configured to inject a non-default configuration into the build, which means that your build experience between Visual Studio and NCrunch would be inconsistent. I would say it's a high priority to properly fix this issue before addressing any others, as your system would have been tested and developed with the configuration selected by Visual Studio in mind. There are two clean ways you can solve this - either you can change the default configuration inside the .proj files to be consistent with your Visual Studio configuration, or you can configure NCrunch to use the same configuration as Visual Studio. There is a project-level NCrunch configuration value called 'Use Build Configuration' that allows you to specify the name of the $(Configuration) property in free-form. Try selecting all the projects in the configuration pane and applying this setting in bulk. I really recommend against re-engineering your tests to work from a different output path, as this would create an inconsistency between your non-NCrunch testing environment and your NCrunch testing environment (which is undesirable!).


I solved this one by adding a few lines of code to detect if the project was built into bin\Debug or bin\x86\Debug and adjust accordingly. It wasn't a big deal.
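A rough sketch of the kind of check described above (the helper name and probing logic are my illustration, not Corey's actual code):

```csharp
using System.IO;

static class TestPaths
{
    // Older VS2005-era projects default the x86 Debug output path to
    // bin\x86\Debug, while a build using the project-file defaults
    // (as NCrunch does) may produce bin\Debug; probe for whichever
    // actually exists and adjust accordingly.
    public static string GetBuildOutputDir(string projectDir)
    {
        string x86Path = Path.Combine(projectDir, @"bin\x86\Debug");
        return Directory.Exists(x86Path)
            ? x86Path
            : Path.Combine(projectDir, @"bin\Debug");
    }
}
```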

However, I would like to understand more about what you mean by "Visual Studio [is] configured to inject a non-default configuration into the build".

The "Active solution configuration" in the Configuration Manager for the solution is set to "Debug". The "Active solution platform" is set to our company name, which is a solution platform I created. Each of the .NET projects in this configuration is set to build on the x86 platform (not Any CPU or anything else), and each of their configurations is set to Debug.

Our automated build process does not use MSBuild to build the solution. When we have used MSBuild to build the solution in the past, we have had problems. Perhaps this indicates there is a problem like what you are describing.

At any rate, the Output path for the Debug configuration of each of our projects is set to bin\x86\Debug, which was the default Visual Studio 2005 applied way back when we used it. Visual Studio 2008, I believe, changed this default to bin\Debug even for x86 built projects. So I suppose I could go through all of my projects and change this, but I'm not sure it is worth it at this point since I solved the issue with a few lines of code.

Remco;2904 wrote:

Visual Studio lockups: NCrunch doesn't make blocking calls to the task runner .exes from the IDE on the primary thread, so my feeling is that most likely these lockups are being caused somehow by test behaviour or through system load. Problems like this can be VERY hard to diagnose because when the lockups occur, it can be very difficult to get meaningful information about the state of the test runner or which tests are being executed. If this keeps happening, I recommend selectively ignoring sections of your tests with NCrunch to see if you can find a failure pattern. Something that may also help is if you want to try hooking a debugger onto the locked up Visual Studio instance itself and inspecting the stack traces of running threads - this could help to confirm or deny whether I can help solve the problem for you with an NCrunch fix.


I think this is a symptom of the date/time stamp issue (and consequently, the huge amounts of database load) I've brought up in a different thread.

Remco;2904 wrote:

'Run all tests manually' not being remembered: NCrunch stores your currently selected engine mode in a solution-specific file, i.e. 'MySolution.ncrunchsolution.user'. It will only save this engine mode on close of the solution or clean exit of Visual Studio. Check that the file isn't read-only (sometimes source control software can lock the file). Based on the previous issue you've described (VS lockups), I'm wondering if it might be possible that you haven't experienced a clean exit after setting the engine mode?


Could be. I'll watch out for this. Probably was never properly saved because of the other issues ultimately, I think, caused by the date/time stamp issue.

Remco;2904 wrote:

Circle markers not updating: This is a symptom of a solution that has many long running tests. If you haven't yet completed a full end-to-end test run with all tests passing (which I strongly suspect is the case), then NCrunch won't have much reliable information about how long each of your tests takes to execute. This means that the execution engine won't be able to make intelligent decisions about how to prioritise tests or threading/task resources, and it will probably be tying up too many of its tasks in long running tests. This issue should effectively solve itself once you've managed a full run through of the entire solution with all enabled tests behaving themselves properly. The processing queue window can be helpful in understanding the behaviour of the engine and why it may seem to be taking a long time to return results.


Yeah, this is likely a symptom of the date/time stamp issue (and consequently, huge amounts of database load).

Remco;2904 wrote:

Inconsistent results in files copied: Can you share any more details about the proxy DLL you are using or how it is generated? Is it created using a build step? Is the DLL being generated using a cross-project dependency of some kind? This could be quite a complex situation that perhaps should be failing consistently and has only been working due to a side-effect. I'm not sure if adding the file to the 'Additional files to include' setting is the right way to solve this problem. If you can share some more details, I'll help you to work it out.


I'll try to get you additional details on this one.

Remco;2904 wrote:

"Running" state test: There are certain error conditions that can create this scenario. They are rare and infrequent, and usually benign (i.e. the test will return to a normal state the next time it is run). If you can find a way to reproduce this issue, I'd really like to take a closer look at it. Most likely it was caused by an internal problem in the engine (in which case submitting a bug report may very well give me the information I need to fix it).


Could be another symptom of the date/time stamp issue.

Remco;2904 wrote:

Thanks for all your effort in trying to get NCrunch to work on this solution. I appreciate going through this process with you as it better enables me to understand the sorts of pains people go through when first introducing the tool into such a complex environment. I hope I can continue to be of help in your implementation/evaluation of the tool.


I would like to see this work with our environment. We are likely not doing things as non-standard as you would think. Ours is a very clean system and the unit test framework we have created to perform unit tests against our database is relatively simple and hands-off for our programmers. I think we just need the date/time stamp and SetUpFixture SetUp/TearDown issues resolved (actually, this last one might not even be a big deal if the date/time stamp issue were resolved).

I am the architect of all of our stuff and the founder/owner of the company, so I can make adjustments to our stuff on a whim whenever I want to try and get this going.

We would be a good customer. We would ultimately need at least 8 fixed seat licenses (the $259 variety... I think that is what you said the cost will be). Considering you are just getting your company off the ground, we would be an excellent early adopting paying customer if we could get this going with our environment.

-Corey
cordoor
#15 Posted : Wednesday, October 3, 2012 9:20:38 PM(UTC)
Rank: Member

Groups: Registered
Joined: 10/2/2012(UTC)
Posts: 15
Location: Orem, UT

Thanks: 7 times
Was thanked: 1 time(s) in 1 post(s)
Ok, I couldn't let this die.

In almost every case we've talked about, the real problem has been either me overlooking something or a minor something in one of our projects. Here are a few things I've found (the big ones):

Date/time stamp: The date/time stamp *is* being preserved. I triple dog checked this. The problem with the database being rebuilt for every unit test was caused by a couple of recently written unit tests that used an obsolete way (in our system) of connecting to the test database. I fixed these tests and removed the obsolete stuff. Now the tests run almost exactly as they do under NUnit (imperceptibly different). I don't know why yesterday I thought the date/time stamp was not preserved. I was probably tired (still tired).

Copying of .sql scripts to NCrunch temp: This *is* happening, contrary to what I believed. After you said it should be happening, I removed the "Additional files to include" entry I had added for the directory of scripts, deleted the test databases, and watched it fail. I narrowed the problem down to a single script that was not included in the project. The script is now included, and everything is fine.

So as of right now everything is working with NCrunch. I'll let you know how it goes in about a week.

Thanks for the help.

-Corey
Remco
#16 Posted : Wednesday, October 3, 2012 9:34:21 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Hi Corey -

We're starting to assemble quite a long chain of back-and-forth replies, so I'll try to gather the relevant pieces here and create a general plan of attack.

From what you've described, the time-stamp issue is the major show-stopper for you at the moment. Although I'm slightly confused about why NCrunch isn't preserving the time-stamps of unopened SQL scripts when setting up the workspace, I'm fairly certain I can recreate a scenario like this and that any problems in this area should be fixable. v1.42 is due around mid-month, which creates an opportunity to include this fix in a stable release. If you're willing, I'd really like to test the fix with you (by sharing an early build) prior to the full release of v1.42. I'll send you a message with an email address we can use to coordinate this if you're interested.

Regarding the pattern of multiple calls through to the test runner (and thus multiple executions of your SetUpFixture's SetUp/TearDown), this one is unfortunately extremely hard to fix on the NCrunch side. As described earlier, it is intrinsically tied to the manner in which NCrunch performs its batching of tests through re-used test processes. However, there is a very simple way to work around this inside your test code. If you define a static bool member on your SetUpFixture, you can use this flag to determine whether the setup or teardown code has already been executed within the test process. If it has, you simply don't run it again. It might feel a bit tacky, though it should prevent any rerunning of setup/teardown code for the life of the task runner.
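A minimal sketch of the guard described above (the class name is illustrative):

```csharp
using NUnit.Framework;

[SetUpFixture]
public class DatabaseEnvironment
{
    // Static, so the flag survives for the life of the test process:
    // even if the test runner re-invokes this fixture for each batch
    // of tests, the expensive setup runs at most once per process.
    private static bool _initialized;

    [SetUp]
    public void SetUp()
    {
        if (_initialized)
            return;
        _initialized = true;

        // expensive one-time work, e.g. rebuilding the test database
    }
}
```

Teardown is harder to guard the same way, since there is no reliable "last batch" signal within a re-used process; one option is to make the setup idempotent instead and let process exit handle the cleanup.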

Regarding the bin\Debug issues, it seems as though you have this one under control. If changing the NCrunch 'Use build configuration' setting didn't have any effect on the output path, then this is probably being controlled by Visual Studio through the $(Platform) configuration value being injected into the build. NCrunch doesn't yet have a way to inject that Platform property into the build via its own configuration, so the only fix would be to change the defaults in each of your project files to align with the values selected in the Visual Studio build configuration. I expect this would also solve the previous problems you've experienced running MSBuild directly against your projects, and it could be something worth looking at in the future to keep a clean house. Though if it works for you now, it may not be worth worrying about!

Regarding the other issues, if you're able to reproduce them and they give you grief, feel free to share more information about them and I'll happily work through them with you.

On a bit of a side note, I offer my congratulations for your success in building your company to the size and level of success you've reached. As someone just starting out, it seems to be a daunting task! :)

Cheers,

Remco

Remco
#18 Posted : Wednesday, October 3, 2012 9:38:35 PM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
Sorry, I think I was posting that last reply while yours was appearing in the forum!

Do let me know if I can be of any further help.


Cheers,

Remco
cordoor
#17 Posted : Thursday, October 4, 2012 4:50:45 AM(UTC)
Rank: Member

Groups: Registered
Joined: 10/2/2012(UTC)
Posts: 15
Location: Orem, UT

Thanks: 7 times
Was thanked: 1 time(s) in 1 post(s)
Remco;2914 wrote:

On a bit of a side note, I offer my congratulations for your success in building your company to the size and level of success you've reached. As someone just starting out, it seems to be a daunting task! :)


Thanks! I appreciate the comment.

I suppose starting a company isn't the hardest part, it is getting a product to market and in the hands of customers. It is perhaps the most difficult thing I have ever done and I think right now you are experiencing this ;-) The core company of our business is www.Mozenda.com. The company we recently launched is www.PricingTrends.com. They've been fun, but tremendous amounts of work.

I learned about NCrunch only a few days ago through some forum post somewhere; I don't remember how or why. It was essentially a footnote in some comment on Stack Overflow or something. I discovered it, I believe, because I was trying to figure out how to get Visual Studio to quit building .NET projects I knew had not changed.

When I read about what NCrunch does, my excitement for the product went through the roof! There are so many enhancements and possibilities for you to pursue, you will be busy for a long time to come and you should establish quickly a very nice customer base. I think you will have a lot of fun, although it will be oodles of work.

A few weeks ago, I looked at Visual Studio 2012 for the first time (too busy to do otherwise). I was utterly disappointed. In a single day, we decided we weren't going to upgrade because it did not offer any compelling new features that actually help engineering departments be more productive, especially those building products. And, they actually *removed* features we've loved, such as web deployment projects (perfect for automated build processes). And the color scheme and icons were atrocious. Literally, my eyes were hurting after about 20 minutes of trying to navigate our projects and differentiate between windows (e.g. the borders are so light you can't readily tell where they separate).

Looking at Visual Studio 2012 was, for the most part, the final nail in the coffin for Microsoft as far as I was concerned. Their developer tools have always been their bread and butter and every release of Visual Studio, until 2012, was such a fantastic delight that they've been able to maintain a nice edge in that area while the rest of their products have stumbled and become second class. But 2012 was not the same for me. It was extremely disappointing.

Then, when I stumbled across NCrunch, I immediately called in a few co-workers to take a look and to share with them one of the very first thoughts that came to mind: Back in "the day" when Microsoft was at the top of their game (pretty much pre-Ballmer), NCrunch would have been *exactly* the sort of feature/functionality they would include in a new major release! There's no way they include something as useful as NCrunch now. They are too lost. How else do you explain the pathetic three minor new features they added to their Javascript development support?

I'm excited for you my brother. I haven't been this excited about a product since Dropbox was first released :-) I apologize for taking so much of your time over the past day on issues that ended up being in my stuff. But I was just so excited to get it working I spewed out whatever I could hoping to get things resolved quickly so I could use it.

Thanks again for the help. See you around the forums.

-Corey
Remco
#19 Posted : Thursday, October 4, 2012 5:10:38 AM(UTC)
Rank: NCrunch Developer

Groups: Administrators
Joined: 4/16/2011(UTC)
Posts: 7,144

Thanks: 959 times
Was thanked: 1290 time(s) in 1196 post(s)
I guess it wouldn't be a surprise to you to know that your post brought a big beaming smile to my face :) Much of what has kept me going on this project (and WILL no doubt keep me going) is the same excitement you've described ... about discovering a whole new product area that is almost completely untapped, and finding a way to make it something really special. I often get the feeling that NCrunch itself is a bit of a manifestation of the solution to almost every frustration I've experienced as a test driven developer, and it brings enormous satisfaction to see the way it can relieve those frustrations not just for myself, but also other developers.

Finally making the decision to take the product commercial was a very difficult one for me, as I'm aware that this choice will take it out of the hands of many people who cannot afford it or justify the cost, but the raw potential of the tool is just too incredible to ignore, and there's no way to get there without the resources to make things happen. I expect that there will be considerable change over the next few weeks as the implications of this decision sink in - based on your own business experience, I'm sure this is something you're well familiar with :)

Anyway, I hope that NCrunch is behaving itself well for you now and that you're already saving precious development time and gaining satisfaction with fields of green markers. If all goes well for NCrunch over the next two months, there will be much more to come over the next few years .. and I very much hope to have you along for the ride!


Cheers,

Remco
cordoor
#20 Posted : Thursday, October 4, 2012 5:43:31 AM(UTC)
Rank: Member

Groups: Registered
Joined: 10/2/2012(UTC)
Posts: 15
Location: Orem, UT

Thanks: 7 times
Was thanked: 1 time(s) in 1 post(s)
Your decision to go commercial is the right one. You are essentially raising capital to improve the product and grow a business without having to go out and raise money (very difficult and not fun). You will have control of the business. You will hire programmers to add features more quickly, which benefits those who value what you provide.

I don't know what your free customer base is, but I suspect a significant portion of them will almost immediately convert to happy paying customers. Then you will take a significant portion of the money you "raise" through sales and hire additional developers, support staff (who can deal with time-consuming people like me on your forums so you can focus on building product ;-)), and so forth.

Another way to look at it is from the perspective of someone like me: Is it worth it to me to contribute $250 so that more productivity-enhancing features can be added more quickly? It is a no-brainer. If my trial works out, we will pay $250 for each of our developers and not bat an eye. Continuing free would mean fewer new features spread over a longer period of time and that is not interesting to someone like me. I want all of the possible features as soon as possible and that is not something you will be able to do alone and I would be surprised if you could find employees to work for you for free.

One advantage I see to this product that may be overlooked by most is the product's potential to influence developers to actually write unit tests. Over the years, I have found programmers generally don't write unit tests even when encouraged over and over to do so. This is counter-intuitive to me because writing unit tests itself is programming, which is more fun than clicking around on dialogs to test things. I think a big part of why programmers don't write unit tests is their perception that writing unit tests takes longer than just starting the program and running through 10 click-around tests on a dialog. Probably, in the short term, clicking around is faster. In the long term, it is slower because regression issues crop up when code changes. But almost nobody in this world is a long term thinker. Programmers are no different. So, most of them don't write unit tests. I am probably the author of over 95% of the unit tests in our system, hehe.

NCrunch has the potential to change this because of all of the annoying, perceived time-consuming issues it solves... like build times. With NCrunch, writing unit tests may just become *perceived* to take less time than starting the app and clicking around. It will probably actually take less time too, but it's really the perception that counts. This is why I am so excited about NCrunch. It could very well change programmers' perception, and that will mean they will finally write unit tests, because they will think it is faster.

-Corey